
Storage Devices, Local File System and Crossbar Network File System
Characteristics, and 1 Terabyte File IO Benchmark
on the “Numerical Simulator III”

Naoyuki FUJITA (fujita@nal.go.jp)
Hirofumi OOKAWA (ookawa@nal.go.jp)
National Aerospace Laboratory of Japan

Abstract

We benchmarked a mass storage system named “CeMSS” on the “Numerical Simulator III” System. It has eighty (80) RAID-5 disk arrays and forty (40) LTO tape drives as storage devices, and it has an HSM-based local file system and a crossbar network file system. We also describe the CeMSS design outline. To clarify our benchmark perspective, we defined a “Standard IO Characteristic” and a “User IO Pattern”. We found that the disk and tape devices of CeMSS are optimized at 2[MB] and 128+[KB] IO sizes, respectively. Using 16-way disks, user application programs can obtain over 1[GB/s] IO throughput on the NS-III, and under the 80-way disk condition, CeMSS could operate on a 1[TB] file within 10 minutes.

1. Introduction

In 2002, the National Aerospace Laboratory of Japan introduced the Numerical Simulator III System. The Numerical Simulator III System has a mass storage system named CeMSS (Central Mass Storage System), which has eighty (80) RAID-5 disk arrays and forty (40) LTO tape drives. CeMSS is connected to CeNSS (Central Numerical Simulation System) via a four (4) Gigabyte-per-second bidirectional crossbar network. In the field of computational fluid dynamics, huge-scale numerical simulations -- e.g., simulations of combustion flow with chemical reactions -- have recently become possible. However, these simulations require high-speed file IO systems that can operate on files at a rate of about one (1) Gigabyte per second.

We realized the throughput requirements of CeMSS with the methodology described below. To inspect the throughput and understand CeMSS’s characteristics in detail, we measured the raw device IO characteristics, the local file system IO characteristics, and the crossbar network file system IO characteristics of the Numerical Simulator III System. Furthermore, we benchmarked one (1) Terabyte file IO. From these measurements, we found that the RAID device on CeMSS is optimized at a two (2) Megabyte IO size, the LTO drive on CeMSS is optimized at a 128+ Kilobyte IO size, the local file system performs at about three to four (3 to 4) Gigabytes per second with eighty-way disks, and the crossbar network file system performs at about one point six (1.6) Gigabytes per second with eighty-way disks. Under these conditions, CeMSS can write/read a one (1) Terabyte file within 10 minutes.

2. “Numerical Simulator III” Mass Storage System

At the National Aerospace Laboratory of Japan (NAL), we introduced the Numerical Simulator I System, which utilized the compute server VP400 and promoted Navier-Stokes-equation-based numerical simulations, in the 1980s. In the 1990s, we built the Numerical Simulator II System, which utilized the compute server Numerical Wind Tunnel and promoted parametric-study calculations of complete-aircraft aerodynamic simulations. Then, in 2002, we introduced the Numerical Simulator III System (NS-III). NS-III has a nine (9)[TFLOPS] compute server (CeNSS: Central Numerical Simulation System), a 3D-visualization server (CeViS: Central Visualization System), and a high-speed mass storage system (CeMSS: Central Mass Storage System). We are going to perform multidisciplinary numerical simulations, unsteady flow analysis, and so forth on the NS-III. Figure 1 shows an overview of NS-III.

According to our estimation of the requirements on CeMSS [1], CeMSS has to provide about one (1) Gigabyte per second of throughput, while a single storage device provides only several Megabytes per second. Therefore, CeMSS should be a parallel IO system. When we build a parallel IO system on a multi-node parallel computer (where a “node” is a computing node; CeNSS is a multi-node parallel computer), we can consider two typical IO models: one is the nodes-wide-parallel-IO model, and the other is the IO-node model (Figure 2).
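To give a feel for the scale of parallelism behind this argument, the following is a minimal back-of-envelope sketch in C. The 30[MB/s] per-device figure is our illustrative assumption, not a measured value; the text above says only that a single device delivers several Megabytes per second.

    /* Back-of-envelope check of the device parallelism a ~1[GB/s]
     * target implies.  The per-device rate is an assumed value. */
    #include <stdio.h>

    int main(void)
    {
        const double target_mb_s     = 1000.0; /* ~1 [GB/s] requirement from [1] */
        const double per_device_mb_s =   30.0; /* assumption: "several MB/s"     */

        /* Devices needed if throughput scaled perfectly linearly. */
        printf("ideal parallelism: %.0f devices\n",
               target_mb_s / per_device_mb_s);  /* ~34 */
        return 0;
    }

Even under this optimistic linear-scaling assumption, on the order of tens of devices must operate in parallel, which is why the two parallel IO models of Figure 2 have to be compared at all.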
[Figure 1. Numerical Simulator III Mass Storage System Overview. CeNSS: Central Numerical Simulation System (9[TFLOPS], 3[TB] memory). CeViS: Central Visualization System (6 pipes). CeMSS: Central Mass Storage System -- storage devices: Disk 50[TB], RAID5 9D+1P (Fujitsu PW-D500B1), 80 drives; Tape 600[TB], LTO (IBM 3580), 40 drives. Local file system: HSM; crossbar network file system.]

Table 1. CeMSS’s Outline Design Items

  Items                                            Selected design
  Individual storage system or local file system   Local file system
  HSM*1 or not HSM                                 HSM
  Reliability/Redundancy                           Disk: RAID5; Tape: media (Original + Copy)
  Cooperation with 3D-visualization system         GSN*2 + Original library
  Cooperation with workstation                     NFS*3

  *1 HSM: Hierarchical Storage Management
  *2 GSN: Gigabyte System Network
  *3 NFS: Network File System

[Figure 2. Two typical parallel IO models on a multi-node parallel computer: (a) the nodes-wide-parallel-IO model, in which each compute node owns disks on the inter-connection, and (b) the IO-node model, in which a dedicated IO node owns the disks.]


By using the nodes-wide-parallel-IO model, we can easily build a parallel IO infrastructure, because each node has its own IO port, so we do not have to prepare special resources for parallel IO. However, nodes-wide-parallel-IO may cause collisions between IO operations and computing operations. Namely, whenever one node needs data that is stored in its neighbor’s storage, the neighboring node is obliged to perform IO operations while doing its own computing operations. This situation makes it difficult for us to estimate IO operation times as well as computing operation times. On the other hand, the IO-node model gives us steadier and higher IO performance, although an extra IO node is required. We adopted the IO-node model on NS-III.

As mentioned above, we chose the IO-node model for CeMSS. There are further items that should be considered in order to build an efficient storage system. Table 1 summarizes these items, and Figure 3 shows the resultant system design of CeMSS.

[Figure 3. System Design of CeMSS. CeNSS compute nodes (user programs over X-bar NFS), CeViS (3D visualization via the Original Library over GSN, 800[MB/s]), and workstations (NFS over an Ethernet LAN) all reach the IO node, which runs the HSM local file system over eighty (80) FC-RAID5 disk arrays and forty (40) LTO tape drives; the crossbar interconnect network provides 4[GB/s] x 2. Measuring Points 1-5 are marked at the raw devices (MP 1), the local file system (MP 2), and the network file system interfaces (MP 3-5).]
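To illustrate the IO-node model adopted above, here is a minimal single-machine sketch in C with pthreads. It is our own toy analogy, not NS-III code: several “compute” threads stand in for compute nodes and hand buffers to one dedicated “IO” thread, the stand-in for the IO node, so storage is touched from exactly one place. The file name, thread count, and buffer sizes are all illustrative.

    /* Toy IO-node model: compute threads produce buffers, one IO
     * thread is the only code path that writes to storage. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define NODES 4                 /* stand-ins for compute nodes    */
    #define BUFSZ (1 << 20)         /* 1[MB] per request (assumption) */
    #define REQS  8                 /* requests per compute thread    */

    typedef struct req { char *data; size_t len; struct req *next; } req_t;

    static req_t *head, *tail;      /* hand-off queue to the IO node  */
    static int done_producers;
    static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;

    static void *compute_node(void *arg)
    {
        (void)arg;
        for (int i = 0; i < REQS; i++) {
            req_t *r = malloc(sizeof *r);
            r->data = malloc(BUFSZ);
            memset(r->data, 'x', BUFSZ);   /* stand-in for computed results */
            r->len = BUFSZ;
            r->next = NULL;
            pthread_mutex_lock(&mu);       /* short hand-off, no device IO  */
            if (tail) tail->next = r; else head = r;
            tail = r;
            pthread_cond_signal(&cv);
            pthread_mutex_unlock(&mu);
        }
        pthread_mutex_lock(&mu);
        done_producers++;
        pthread_cond_signal(&cv);
        pthread_mutex_unlock(&mu);
        return NULL;
    }

    static void *io_node(void *arg)
    {
        FILE *fp = arg;                    /* the only thread doing file IO */
        for (;;) {
            pthread_mutex_lock(&mu);
            while (!head && done_producers < NODES)
                pthread_cond_wait(&cv, &mu);
            req_t *r = head;
            if (r) { head = r->next; if (!head) tail = NULL; }
            pthread_mutex_unlock(&mu);
            if (!r) break;                 /* queue drained, producers done */
            fwrite(r->data, 1, r->len, fp);
            free(r->data);
            free(r);
        }
        return NULL;
    }

    int main(void)
    {
        FILE *fp = fopen("io_node_demo.out", "wb");
        if (!fp) return 1;
        pthread_t io, comp[NODES];
        pthread_create(&io, NULL, io_node, fp);
        for (int i = 0; i < NODES; i++)
            pthread_create(&comp[i], NULL, compute_node, NULL);
        for (int i = 0; i < NODES; i++)
            pthread_join(comp[i], NULL);
        pthread_join(io, NULL);
        fclose(fp);
        return 0;
    }

The structure mirrors Figure 2(b): compute threads block only on the short hand-off, never on the device itself, which is exactly what makes IO operation times easier to estimate in the IO-node model.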
3. Mass Storage Benchmark Perspective

Estimating IO performance on CeMSS, with an HSM as its local file system, takes several steps. The detailed estimation steps are as follows:
1. Estimate storage device characteristics
  1.1. Disk device characteristics
  1.2. Tape device characteristics
2. Estimate basic file system performance
  2.1. One-file disk IO performance
  2.2. One-file migrating performance
  2.3. One-file staging performance
3. Estimate file system performance under actual conditions
  3.1. Multiple-file disk IO performance
  3.2. Multiple-file migrating performance
  3.3. Multiple-file staging performance

NS-III has measuring points for IO throughput (see Figure 3; Measuring Point (MP) 1-5). In this paper we report results at three measuring points. The selected measuring points are the following:
MP 1: Raw device IO characteristics on CeMSS
MP 2: Local file system IO characteristics on CeMSS
MP 3: Crossbar network file system IO characteristics on CeNSS

3.1. “Standard IO Characteristic”

Before starting the discussion, we would like to define some terms for measurement.
Disk Device --- One RAID disk array unit in this paper. One RAID has 9 data disks and 1 parity disk.
Raw IO Size --- Number of bytes of an IO unit read from or written to a device, which is assigned to the disk device in advance.
User IO Size --- Total number of bytes per one user IO operation.
File Size --- Total number of bytes in a file.

3.2. “User IO Pattern”

We assumed that a user writes and/or reads 2[MB] of data per user IO operation and that the total number of bytes per file is 2[GB]. That is to say, one file is created by about one thousand write operations. Of course, user IO patterns are tightly related to each application, so this pattern is just an example.

3.3. Parameters

Having defined the “Standard IO Characteristic” and the “User IO Pattern”, we can define parameters for measuring characteristics and benchmarking. Some of these parameters are: File Size, User IO Size, Raw IO Size, and Number of Device Parallelization. In this paper we measure the characteristics over IO sizes and the number of parallel devices.

4. Characteristics and 1 Terabyte File IO Benchmark

4.1. Raw Device IO Characteristics on CeMSS

On CeMSS, the disk device is a RAID disk array unit. So, to grasp the disk device characteristics, we measured IO throughput as a function of raw IO size. Throughput was calculated from the time stamps taken just before and after an IO operation, together with the file size. In this measurement, the IO operation is a “write” and/or “read” low-level file IO function call, and the IO operation was performed on a raw device directly. Figure 4 shows some of the results.
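The following C program is a minimal sketch of this measurement procedure, not the original benchmark code: it stamps the clock just before and after a run of low-level write() calls of a given raw IO size and reports bytes divided by elapsed time. The path, IO size, and operation count are placeholders (the 2[MB] default echoes the “User IO Pattern” of Section 3.2); pointing it at a real raw device would overwrite that device, so run it against a scratch file unless that is intended.

    /* Sketch: measure write throughput for one raw IO size. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "scratch.dat"; /* or a raw device */
        size_t io_size   = argc > 2 ? (size_t)atol(argv[2]) : 2UL << 20; /* raw IO size */
        long   count     = argc > 3 ? atol(argv[3]) : 1024;    /* IO operations    */

        char *buf = malloc(io_size);
        if (!buf) return 1;
        memset(buf, 0, io_size);

        int fd = open(path, O_WRONLY | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);          /* stamp before IO */
        for (long i = 0; i < count; i++)
            if (write(fd, buf, io_size) != (ssize_t)io_size) {
                perror("write");
                return 1;
            }
        fsync(fd);                                    /* flush so the device is timed */
        clock_gettime(CLOCK_MONOTONIC, &t1);          /* stamp after IO  */

        double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        double mb  = (double)io_size * count / (1024.0 * 1024.0);
        printf("io_size=%zu B  total=%.0f MB  %.1f MB/s\n", io_size, mb, mb / sec);
        close(fd);
        free(buf);
        return 0;
    }

Sweeping the io_size argument over a range such as 128[KB] to 8[MB] yields the kind of throughput-versus-raw-IO-size curve plotted in Figure 4.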
4.2. Local File System IO Characteristics on CeMSS
