Open-E JovianDSS License Calculator

Select system type / architecture

Single node

Simple architecture with one server

Drives can be SAS or SATA. When using JBODs, SAS JBODs and SAS cables are required.

Shared storage HA Cluster

Single storage shared between the nodes

Common storage (internal, JBODs, JBOFs, etc.), both nodes are directly connected to all storage devices in the cluster at the same time. When using JBODs, SAS JBODs and SAS cables are required.

Non-shared storage HA Cluster

Each node has its own storage.

Each node has direct access only to its own storage devices. Nodes communicate with each other to access their storage counterparts. When using JBODs, SAS JBODs and SAS cables are required.

Only the "2-way Mirror" and "4-way Mirror" redundancy types are allowed for "Non-shared storage HA Clusters". To change the system type to "Non-shared storage HA Cluster", ensure that all pools use a suitable redundancy type and recalculate.

Too many pools to change the system type

Too many zpools exist in the configuration. The maximum number of supported pools in an HA Cluster is 3. To change the system type to "Non-shared storage HA Cluster", delete the excess pools.

Pool-0 Clone | Remove

Calculation parameters

Please fill out fields below.

based on required storage capacity

based on number of disks and data groups

TiB
TB

Help me choose

Usable data storage capacity: 0 TiB

Total disks in data groups: 0 disks

Single disk capacity: 0 TB

Redundancy level: RAID-0

Number of data groups: 0

Disks in data group: 0

Detailed calculations
Zpool storage characteristics

Usable data storage capacity: 0 TiB

Total number of data disks: 0 disks

Number of data groups: 0 groups

Disks in data group: 0 disks

Disk groups layout:

Data disk

Parity disk

Detailed storage calculations

What does each value mean?

Total storage capacity: 0.00 TiB

(0 disks x 0TB = 0.00TB = 0.00TiB)

Storage capacity after RAID is applied: 0.00 TiB

(0 disks x 0TB = 0.00TB = 0.00TiB)

Usable data storage capacity: 0.00 TiB

(0.00TiB x 0.9 = 0.00TiB)
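The three steps above (raw capacity, capacity after RAID, and the 10% reservation) can be sketched in Python. The 12 x 4TB RAID-Z2 example values are illustrative assumptions, not output from the calculator:

```python
TB = 10**12   # disk vendors quote capacity in base-10 terabytes
TiB = 2**40   # software reports capacity in base-2 tebibytes

def usable_capacity_tib(total_disks, disk_tb, parity_disks):
    """Sketch of the calculator's three steps, as shown in the formulas above."""
    total = total_disks * disk_tb * TB / TiB                    # raw capacity, TB -> TiB
    after_raid = (total_disks - parity_disks) * disk_tb * TB / TiB  # parity disks excluded
    usable = after_raid * 0.9                                   # 10% pool reservation
    return total, after_raid, usable

# Illustrative example: one RAID-Z2 group of 12 x 4TB disks (2 parity disks)
total, after_raid, usable = usable_capacity_tib(12, 4, 2)
print(f"{total:.2f} TiB total, {after_raid:.2f} TiB after RAID, {usable:.2f} TiB usable")
```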

Expected annual zpool reliability

The expected portion of a year during which the zpool operates reliably, given the configuration and the probability of disk failure.

00.0%

Not recommended! Storage solutions with expected annual reliability below 99.0% should not be considered for any production deployment.

Zpool reliability parameters

Mean Time To Recovery (MTTR) in days: 0

Mean Time Between Failures (MTBF) in hours: 0
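The calculator's exact reliability formula is not published here, but a simplified illustrative model shows how the configuration and reliability parameters interact: a group loses data only when one disk fails and `parity` more disks fail during the recovery (MTTR) window. This sketch is an assumption for illustration, not the calculator's method:

```python
import math

HOURS_PER_YEAR = 8766  # 365.25 days * 24 hours

def annual_reliability(groups, disks_per_group, parity, mtbf_h, mttr_h):
    """Rough annual zpool reliability estimate (illustrative model only).

    A data-loss event in a group requires 1 + parity overlapping failures:
    one initial disk failure, then `parity` more during the MTTR window.
    """
    lam = 1.0 / mtbf_h                       # per-disk failure rate [1/h]
    loss_rate = disks_per_group * lam        # rate of an initial failure in a group
    for i in range(1, parity + 1):
        # each additional failure must occur within the recovery window
        loss_rate *= (disks_per_group - i) * lam * mttr_h
    pool_loss_rate = groups * loss_rate      # groups assumed independent
    return math.exp(-pool_loss_rate * HOURS_PER_YEAR)

# Illustrative example: 4 RAID-Z2 groups of 10 disks, MTBF 1.2M h, MTTR 3 days
r = annual_reliability(4, 10, 2, 1_200_000, 72)
print(f"{r * 100:.4f}%")
```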

Change parameters

Zpool performance rating

Read Performance Rating:

1 10

0.0 x single disk

Write Performance Rating:

1 10

0.0 x single disk

Space efficiency

This ratio shows the percentage of the total zpool capacity that is reserved for data.

00%

Zpool capacity reserved for data

0.00TiB x 100% / 0.00TiB = 0%
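The formula above is a straightforward ratio; a one-function Python sketch (the example figures are an assumed 12 x 4TB RAID-Z2 pool, not calculator output):

```python
def space_efficiency(usable_tib, total_tib):
    """Usable capacity as a percentage of the total zpool capacity."""
    return usable_tib * 100.0 / total_tib

# Illustrative example: 32.74 TiB usable out of 43.66 TiB total
print(f"{space_efficiency(32.74, 43.66):.0f}%")
```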

Non-data groups

Non-data groups do not affect the licensed storage capacity, regardless of their number and capacity.

Write log disks: 0x 0GB

Read cache disks: 0x 0GB

Spare disks: 0x 0TB

Total number of non-data disks: 0

Edit non-data groups

Open-E JovianDSS Storage and RAID Calculator
Select data group redundancy level

Please choose redundancy type.

NO REDUNDANCY

1 disk per group

There are no parity disks; the total capacity equals the capacity of all disks.

Suitability for mission critical solutions:

NO REDUNDANCY

This redundancy level is not allowed for selected system architecture.

A group consists of only a SINGLE disk. Within the pool, this configuration behaves like a regular RAID-0.

The "No redundancy" configuration DOES NOT accept any disk failures. This configuration should not be used for mission critical applications at all!

It is also recommended not to exceed 8 SINGLE disks in the pool, because a single disk failure results in the destruction of the whole pool. The chance of suffering a disk failure increases with the number of disks in the pool.

Important
The pool performance with a SINGLE drive in each group is the highest possible and increases with the number of groups (disks) in the pool. For mission critical applications, it is recommended to use RAID-Z2, RAID-Z3, or a 3-way mirror instead of "No redundancy".

This configuration can be used with Hardware RAID volumes where redundancy is preserved on a hardware level.

2-WAY MIRROR

2 disks per group

1 disk is a parity disk. Total capacity equals the capacity of 1 disk.

Suitability for mission critical solutions:

2-WAY MIRROR

The chances of suffering multiple disk failures increase with the number of MIRRORs in the Pool.

The 2-WAY MIRROR accepts only a single disk failure per MIRROR group.
MIRRORs can be used for mission critical applications, but it is recommended not to exceed 12 MIRRORs in the Pool and to avoid HDDs larger than 4TB (up to 12*2=24 disks recommended for mission critical applications and 24*2=48 disks for non-mission critical applications in the pool).

Note that pool performance increases with the number of MIRRORs in the pool. For mission critical applications using disks larger than 4TB or more than 12 groups, it is recommended to use 3-way MIRRORs, RAID-Z2, or RAID-Z3.

3-WAY MIRROR

3 disks per group

2 out of 3 disks are parity disks. Total capacity equals the capacity of 1 disk.

Suitability for mission critical solutions:

3-WAY MIRROR

This redundancy level is not allowed for selected system architecture.

The chances of suffering multiple disk failures increase with the number of MIRRORs in the Pool.

The 3-WAY MIRROR accepts up to two disk failures per 3-WAY MIRROR group.
3-WAY MIRRORs can be used for mission critical applications, but it is recommended not to exceed 16 MIRRORs in the Pool and to avoid HDDs larger than 10TB (up to 16*3=48 disks recommended for mission critical applications and 24*3=72 disks for non-mission critical applications in the pool).

Note that pool performance increases with the number of MIRRORs in the pool. For mission critical applications using disks larger than 10TB, it is recommended to use RAID-Z3.

4-WAY MIRROR

4 disks per group

3 disks out of 4 are parity disks. Total capacity equals the capacity of 1 disk.

Suitability for mission critical solutions:

4-WAY MIRROR

The chances of suffering multiple disk failures increase with the number of MIRRORs in the Pool.

The 4-WAY MIRROR accepts up to three disk failures per MIRROR group.
The 4-WAY MIRROR is recommended for METRO Clusters and can be used for mission-critical applications.

It is also recommended not to exceed 24 4-WAY MIRROR groups in the Pool, because damage to a single group results in the destruction of the whole Pool (up to 24*4=96 disks recommended for mission-critical applications in the pool). HDDs larger than 16TB should be avoided.

Note that pool performance increases with the number of MIRRORs in the pool.

RAID-Z1

3-8 disks in a group

1 disk in a data group may fail. Total capacity equals the sum of all disks minus 1 disk.

Suitability for mission critical solutions:

RAID-Z1

This redundancy level is not allowed for selected system architecture.

The chances of suffering multiple disk failures increase with the number of disks in the RAID-Z1 group.

The RAID-Z1 accepts only a single disk failure per RAID-Z1 group.
RAID-Z1 can be used for NON-mission critical applications; it is recommended not to exceed 8 disks per group and to avoid HDDs larger than 4TB.

It is also recommended not to exceed 8 RAID-Z1 groups in the Pool, because damage to a single group results in the destruction of the whole Pool (up to 8*8=64 disks recommended for non-mission critical applications in the pool).

Note that pool performance doubles with 2 RAID-Z1 groups of 4 disks each compared to a single RAID-Z1 group of 8 disks. For mission critical applications, it is recommended to use RAID-Z2, RAID-Z3, or 3-way mirrors instead of RAID-Z1.

RAID-Z2

4-24 disks in a group

2 disks in a data group may fail. Total capacity equals the sum of all disks minus 2.

Suitability for mission critical solutions:

RAID-Z2

This redundancy level is not allowed for selected system architecture.

The chances of suffering multiple disk failures increase with the number of disks in the RAID-Z2 group.

The RAID-Z2 accepts up to two disk failures per RAID-Z2 group.
RAID-Z2 can be used for mission critical applications.

It is recommended not to exceed 12 disks per group for mission critical applications and 24 disks for NON-mission critical applications. It is also recommended not to exceed 16 RAID-Z2 groups in the Pool, because damage to a single group results in the destruction of the whole Pool (up to 16*12=192 disks recommended for mission critical applications and 16*24=384 disks for non-mission critical applications in the pool). HDDs larger than 16TB should be avoided.

Note that pool performance doubles with 2 RAID-Z2 groups of 6 disks each compared to a single RAID-Z2 group of 12 disks. If tolerance of 3 disk failures per RAID group is required, it is recommended to use RAID-Z3.

RAID-Z3

4-24 disks in a group

3 disks in a data group may fail. Total capacity equals the sum of all disks minus 3 disks.

Suitability for mission critical solutions:

RAID-Z3

This redundancy level is not allowed for selected system architecture.

The chances of suffering multiple disk failures increase with the number of disks in the RAID-Z3 group.

The RAID-Z3 accepts up to three disk failures per RAID-Z3 group.
RAID-Z3 can be used for mission critical applications.

It is recommended not to exceed 24 disks per group for mission critical applications and 48 disks for NON-mission critical applications. It is also recommended not to exceed 24 RAID-Z3 groups in the Pool, because damage to a single group results in the destruction of the whole Pool (up to 24*24=576 disks recommended for mission critical applications and 24*48=1152 disks for non-mission critical applications in the pool). HDDs larger than 16TB should be avoided.

Note that pool performance doubles with 2 RAID-Z3 groups of 12 disks each compared to a single RAID-Z3 group of 24 disks.

Data disk Parity disk

Learn more about data redundancy in ZFS

The redundancy level sets the number of parity disks in a data group. This number specifies how many disks may fail without the data group losing operation. Higher parity levels require more computation from the system, increasing redundancy at the cost of performance.
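The parity counts described by the redundancy levels above can be summarized in a small lookup table; this is a sketch of the relationship, with the level names chosen here for illustration:

```python
# Parity disks per data group for each redundancy level, as described above
PARITY_DISKS = {
    "no redundancy": 0,
    "2-way mirror": 1,
    "3-way mirror": 2,
    "4-way mirror": 3,
    "raid-z1": 1,
    "raid-z2": 2,
    "raid-z3": 3,
}

def data_disks(level, disks_in_group):
    """Disks in a group that actually hold data (group size minus parity)."""
    return disks_in_group - PARITY_DISKS[level]

print(data_disks("raid-z2", 12))  # 12-disk RAID-Z2 group: 10 data disks
```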

RAID-Z is a data/parity distribution scheme like RAID-5, but it uses dynamic stripe width: every block is its own RAID-Z stripe, regardless of block size, so every RAID-Z write is a full-stripe write. RAID-Z is also faster than traditional RAID-5 because it does not need to perform the usual read-modify-write sequence.

Zpool storage characteristics - what each value means

To understand the presented values, please note that storage hardware uses the base-10 system, so disk capacity is shown in TB (10^12-byte units). Software uses the base-2 system, so storage capacity is shown in TiB (2^40-byte units). For more information, refer to our blog article.

Total storage capacity is the sum of the capacity of all disks before redundancy is applied and converted to units used by operating systems (TB → TiB).

Storage capacity after RAID or disk mirroring is applied is the sum of the capacity of all disks after redundancy is applied and converted to units used by operating systems (TB → TiB). This capacity excludes parity disks.

Usable data storage capacity is the functional storage space available for user data after redundancy has been applied and 10% of the pool space has been reserved. To ensure the pool works correctly and efficiently, 10% of its capacity must be reserved.

Expected annual zpool reliability parameters
General information

The calculation is based on the zpool configuration and zpool reliability parameters.

Zpool configuration parameters used to make the calculation:

  • Number of data groups
  • Number of disks in a data group
  • Number of parity disks

Zpool reliability parameters:

  • Mean Time To Recovery (MTTR)
  • Mean Time Between Failures (MTBF) or Annual Failure Rate (AFR)

You can use the default zpool reliability parameters or enter your own based on data provided by the disk vendor. More precise data makes the result more accurate.

Zpool reliability parameters description

Mean Time To Recovery (MTTR)

MTTR is the average time it takes to recover the pool after a disk failure (including the RAID rebuild).

Mean Time Between Failures (MTBF)

MTBF is the predicted elapsed time between disk failures during normal system operation.

Annual Failure Rate (AFR)

Annual Failure Rate is a parameter that represents the same information as MTBF, expressed as a percentage: the probability of a disk failing during a single year. Which value is available for a specific disk depends on the vendor.
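Because MTBF and AFR carry the same information, one can be derived from the other. A sketch assuming the standard exponential failure model (the 1.2M-hour MTBF figure is an illustrative assumption):

```python
import math

HOURS_PER_YEAR = 8766  # 365.25 days * 24 hours

def mtbf_to_afr(mtbf_hours):
    """AFR as a percentage, assuming an exponential failure model."""
    return (1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)) * 100

def afr_to_mtbf(afr_percent):
    """Inverse conversion: AFR percentage back to MTBF in hours."""
    return -HOURS_PER_YEAR / math.log(1 - afr_percent / 100)

print(f"{mtbf_to_afr(1_200_000):.2f}% AFR for a 1.2M-hour MTBF")
```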

Change parameters

Mean Time To Recovery

Mean Time To Recovery (MTTR) is the average time it takes to recover the pool after a disk failure (including the RAID rebuild).

days

Disks reliability

Mean Time Between Failures (MTBF) is the predicted elapsed time between disk failures during normal system operation.

Annual Failure Rate (AFR) is the same value expressed as the probability of a disk failing during a single year.

MTBF

hours

AFR

%
Zpool performance rating

Read Performance Rating:

1 10

7.4 x single disk

Write Performance Rating:

1 10

4.9 x single disk

The Performance Rating expresses effective performance relative to a single disk. Example: a performance index of 4.6 equals the performance of 4.6 single disks without redundancy or striping.

Important: the Zpool Performance Rating does not equal overall system performance, which is accelerated significantly by caching and by converting random I/O into sequential I/O.

However, it matters for sustained I/O: with sustained I/O, a pool consisting of 2 data groups is about twice as fast as a pool with a single data group. Disk replacement and scrub speed also scale with the Zpool Performance Rating.

Performance rating does not take into account write log and read cache disks.

Add non-data groups

Write log disks

A storage area that temporarily holds synchronous writes until they are written to the zpool. It is stored on separate media from the data, typically on a fast device such as an SSD.

GB

Read cache disks:

Used to provide an additional layer of caching between main memory and disk. For read-heavy workloads, cache devices allow much of the work to be served from low-latency media.

GB

Spare disks

Spare disks are drives kept on active standby for use when a disk drive fails. A spare disk can be used for any data group. Spare disks should have a capacity equal to or greater than that of the disks in the data groups.

Add zpool

The maximum number of supported pools in HA Clusters is 3.

The maximum number of supported pools in HA Cluster

The information below is based on the release notes for Open-E JovianDSS.

For Shared and Non-shared storage HA Clusters, the maximum number of supported pools in Open-E JovianDSS is three. With more pools, an unexpectedly long failover time might occur. The failover procedure moves pools in sequence: if all pools are active on a single node and failover needs to move all 3 pools, the failover may take longer than 60 seconds, the default iSCSI timeout in Hyper-V Clusters. Under heavy load, switching cluster resources may also take too long in some environments.

The maximum number of supported pools has been reached

The maximum number of supported pools in HA Clusters is 3. Since three pools already exist in the configuration, you can't clone the selected pool.

System summary

System type

Shared storage HA Cluster

Total disks in data groups

0 disks

Zpool in the system

0 zpools

Total Write log / Read cache disks

0 disks

Total usable data storage capacity

0.00TiB

Total spare disks

0 disks

Usable capacity
Redundancy level
Disks in data groups
Zpool reliability
Space efficiency
Read performance
Write performance

The fields marked with * are required.

I have read the data protection information.