2 A bit of NAS-ing
Benyovszky Balázs, Storage Technical Specialist

3 Summary
Home NAS (the digital home)
IBM NAS concepts
Positioning
NAS tiering
Scale Out NAS

4 Home NAS

5 Digital needs emerging in our homes
More family members, more PCs, ever more smart devices: mobiles, tablets
Photos: old digitized photos, new digital cameras, camera phones
Music: old digitized recordings, newly bought digital music
Movies: old digitized VHS tapes, newly bought films, recorded TV shows
Documents: inventories, CVs, contracts, floor plans, thesis work

6 The chaos
Many different forms of data storage in many different places: USB drives, the kids'/family PC, the work laptop, the Christmas tablet, the mp3 player, plenty of phones...
Many users: Dad, Mom, the kids
Simultaneous use of the data by several people is not solved
Accessibility problems

7 Building a home NAS and media library
Home network: Wi-Fi router, LAN
Central storage: a designated PC, a micro PC, a router + USB disk, a media server, a NAS
Appropriate media formats and quality
DLNA (Digital Living Network Alliance) capable devices: TV, amplifier, digital photo frame, IP camera

8 Digital home

9 IBM NAS

10 SAN or NAS? Workload-optimized systems
SAN-attached disks (block): storage optimized for low transaction latency
Overwrites in place (block protocol)
Does not interpret the data content
Ideally behaves like a local disk
NAS filers (files, snapshots): storage optimized for shared access
Always appends instead of overwriting; uses pointers (inodes)
File aware: can be shared among clients with access rights
Drawback: performance degrades over time*
* NAS performance depends on the availability of contiguous empty space (defrag)
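Below is a minimal Python sketch (my own illustration, not IBM code) of the two write models contrasted above: a block device that overwrites sectors in place, and a NAS-style store that always appends and re-points an inode table, which is what makes snapshots cheap and why free space fragments over time.

```python
# Illustration only (not IBM code): the two write models contrasted above.
# A block device overwrites sectors in place; a NAS-style file store appends
# new blocks and re-points an inode table, so old blocks can back a snapshot.

class BlockDevice:
    """Overwrite in place: previous content is gone after a write."""
    def __init__(self, sectors):
        self.sectors = [b""] * sectors

    def write(self, lba, data):
        self.sectors[lba] = data                # in-place overwrite, no history


class AppendingFileStore:
    """Append + pointer update (inode-like): old blocks survive for snapshots."""
    def __init__(self):
        self.blocks = []                        # ever-growing log of data blocks
        self.inodes = {}                        # file name -> index of current block
        self.snapshots = {}                     # snapshot name -> copy of inode table

    def write(self, name, data):
        self.blocks.append(data)                # always append, never overwrite
        self.inodes[name] = len(self.blocks) - 1

    def snapshot(self, snap_name):
        self.snapshots[snap_name] = dict(self.inodes)   # just copy the pointers

    def read(self, name, snap_name=None):
        table = self.snapshots[snap_name] if snap_name else self.inodes
        return self.blocks[table[name]]


disk = BlockDevice(sectors=4)
disk.write(0, b"v1")
disk.write(0, b"v2")                            # b"v1" is irrecoverably overwritten

fs = AppendingFileStore()
fs.write("report.doc", b"v1")
fs.snapshot("monday")
fs.write("report.doc", b"v2")
print(fs.read("report.doc"))                    # b'v2' (current data)
print(fs.read("report.doc", "monday"))          # b'v1' (snapshot still sees old block)
```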

11 File growth outpaces database growth – why?
Source: IDC (NAS vs. SAN capacity growth)
Some competitors say: "proof that NAS can replace SAN in every respect"
The reality: file space is often unmanaged, "write once, read never"
→ Discuss NAS lifecycle management and the IBM Active Cloud Engine

12 NAS Portfolio 2012 / 2013
SAN disks: DS3000, V3000 (entry price) – Storwize V7000 (mid price) – XIV, DS8000 (enterprise) – RamSan (flash only)
NAS filers: N3000 (entry level) – N6000 (midrange) – V7000 Unified with the "mini" Active Cloud Engine – N7000 (enterprise) – SONAS (GPFS) and gateways with the Active Cloud Engine (ultrascalable)

13 NAS positioning
Performance per $: V7000 Unified – tape and other storage capacity brought in; more than 100% capacity growth for free (1:2–1:3 RtCA)
Broad applicability: N3000, N6000, N7000 – high-level application integration, file deduplication, MetroCluster, … (30:4)
Scalability / lifecycle management: SONAS (GPFS), Gateway – the highest file system performance available on the market, scalable to PB sizes

14 Traditional NAS vs. Scale Out NAS
"Traditional NAS": file server 1, file server 2, file server 3 – each a storage island holding File 1, File 2, File 3 separately
"Scale Out NAS": a cluster of Interface node 1 … Interface node n acting as one large 'virtual' server with automated storage tiering, holding ALL FILES in a single namespace
A few "traditional NAS" challenges:
Adding file servers leads to fragmented data, hot spots and underutilized disks
It is more complex to manage multiple NAS appliances
Operational costs grow
Goals of Scale Out NAS:
Scale performance and capacity with the number of disks and file servers
Scale performance and capacity independently, with interface and storage nodes
Very high aggregate performance through parallelism
Greatly simplified management because it is one system
Operational cost reduction
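The following sketch (my own, not IBM code; node and pod names and counts are invented) illustrates the scale-out idea of one namespace over many nodes: any interface node can serve any file and placement is spread across all storage pods, so growth adds bandwidth instead of creating another island.

```python
# Illustration only (not IBM code; node and pod names/counts are invented).
# One namespace, many interface nodes: any node can serve any file, and data
# placement is spread over all storage pods, so adding nodes/pods adds
# bandwidth instead of creating another storage island.

import hashlib

INTERFACE_NODES = [f"ifnode-{i}" for i in range(1, 7)]   # assumed 6 interface nodes
STORAGE_PODS = [f"pod-{i}" for i in range(1, 5)]         # assumed 4 storage pods

def owner_pod(path: str) -> str:
    """Deterministically spread files of one namespace across all pods."""
    digest = int(hashlib.md5(path.encode()).hexdigest(), 16)
    return STORAGE_PODS[digest % len(STORAGE_PODS)]

def serving_node(client_id: int) -> str:
    """Any interface node can serve any file; clients are simply balanced."""
    return INTERFACE_NODES[client_id % len(INTERFACE_NODES)]

for client, path in enumerate(["/proj/a.dat", "/proj/b.dat", "/home/x.mp4"]):
    print(f"{path}: served by {serving_node(client)}, stored on {owner_pod(path)}")
```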

15 "Unified" Storage
Nowadays most NAS solutions are "unified" or "gateway" types: in that case they can also provide block access (FC, iSCSI) to the applications.
Gateway (e.g. V7000 Unified): FC/iSCSI served natively from the block side, NFS/CIFS from the file side – PRO: performance per €; CON: more management
Unified filer (e.g. NetApp FAS): NFS/CIFS natively, FC/iSCSI through emulation – PRO: flexibility; CON: performance per €

16 N series or Storwize V7000U?
N6000 / N7000 – versatility:
High-level application integration
Deduplication on active data (VMs, user space)
Compliance lock option (WORM)
The fill level affects performance
V7000U – performance per €:
Pure SAN performance for block-level traffic
Real-time compression on active data
Capacity can grow by more than 100% (HSM) while performance is preserved
Fast NAS growth with little intervention

17 Cluster File System (GPFS) in SONAS
A cluster file system virtualizes ONE filer across many nodes
Cluster file systems don't come out of thin air – they mature over time
A cluster file system cannot easily be ported onto existing NAS products
Clustered nodes behave like a service cloud (no service is tied to a specific node)

18 Software & Hardware Components
Backup & migration protocols: Symantec NetBackup, Legato NetWorker, CommVault Simpana
File access protocols: CIFS, NFS, HTTPS, FTP, SCP, NDMP
Cluster manager: Clustered Trivial Database (CTDB)
IBM support: call home
Parallel file system & Active Cloud Engine: IBM GPFS
Monitoring agents
Management interfaces: GUI (web) & CLI for SONAS admins
Anti-virus scan: Symantec, McAfee (external AV server)
Security: AD/LDAP/NIS server
Remote cache: Panache (remote SONAS)
Backup & recovery: TSM client, remote TSM server
ILM / HSM: HSM agent, remote TSM server
Disaster recovery: remote replication, rapid data recovery, snapshots
Base operating system: Red Hat Enterprise Linux 6 (64-bit)
Hardware – servers: IBM System x3650 M3
Hardware – connectivity: Voltaire InfiniBand, SMC Ethernet switches, IBM/Brocade SAN switches (gateway)
Hardware – storage: DDN 6620 (appliance); XIV, SVC or Storwize V7000 (gateway)

19 SONAS Architecture – Scale Out Grid
External local area network (1 GbE & 10 GbE); no single point of failure
Interface Node 1 … Interface Node 30: scales front-end cache, throughput & bandwidth
Redundant high-bandwidth, low-latency InfiniBand fabrics
Internal redundant management local area network (1 GbE)
Storage Pod 1 … Storage Pod 30: scales back-end throughput & bandwidth, capacity & IO/s

20 SONAS Architecture – Storage Pool
Within the same scale-out grid, storage pools are created from volumes taken from homogeneous, dedicated RAID groups across all storage pods and enclosures in the SONAS cluster

21 SONAS Scale Out Approach – File System
Within the same scale-out grid, multiple storage pools can be assigned to a single file system, both for proper tiering and for performance

22 Terminology
A file is made up of metadata and data
A fileset is a logical sub-tree of a file system; it provides more granularity, flexibility and manageability for features such as quota management, snapshots, policies, etc., and in many respects behaves like an independent file system
A file system is created using Network Shared Disks (LUNs) from one or more storage pools
A storage pool is a group of NSDs within a file system; it is recommended to group NSDs of the same characteristics (FC, SAS, SATA, NL-SAS, RPM, size, same RAID type and topology)
A Network Shared Disk (NSD) is a Logical Unit Number (LUN) assigned to a number of file modules; it is recommended to dedicate hard disk drive spindles (a RAID array) to each NSD
File attributes include the folder name; file data can be moved to different storage pools within the file system
Up to 256 file systems per Storwize V7000U, up to 3,000 filesets per file system, up to 1,000,000 files per file system
Note: a storage pool here is more like a tier-level attribute (Bronze/Silver/Gold) than the kind of pool found in the Storwize V7000 storage module, SVC or XIV. In the disk-subsystem world a pool is a group of arrays or spindles shared by multiple LUNs – think of a VMware datastore. With Storwize V7000U file modules or SONAS it is a group of NSDs assigned to a file system (an NSD being a volume/LUN on the Storwize V7000 storage module, part of a storage pool that can also hold LUNs/volumes used for block access), so a pool in this sense is NOT shared by multiple file systems; it is closer to a tiering attribute given to an NSD and used later by ACE policies
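A toy model of these terms may help; the following Python sketch (illustration only, not IBM code; names and sizes are invented) mirrors the hierarchy described above: NSDs grouped into storage pools inside one file system, with filesets as logical sub-trees.

```python
# A toy data model (illustration only, not IBM code) of the terms above: an
# NSD is a LUN, a storage pool groups NSDs inside one file system as a tier
# attribute, and filesets are logical sub-trees of that file system.

from dataclasses import dataclass, field
from typing import List

@dataclass
class NSD:                       # Network Shared Disk = a LUN given to the file modules
    name: str
    size_gb: int
    media: str                   # e.g. "SAS" or "NL-SAS"; keep pools homogeneous

@dataclass
class StoragePool:               # behaves like a tier attribute (Bronze/Silver/Gold)
    tier: str
    nsds: List[NSD] = field(default_factory=list)

@dataclass
class Fileset:                   # logical sub-tree: own quota, snapshots, policies
    path: str

@dataclass
class FileSystem:
    name: str
    pools: List[StoragePool] = field(default_factory=list)   # not shared with other FSs
    filesets: List[Fileset] = field(default_factory=list)    # up to 3,000 per file system

fs = FileSystem(
    name="projects",
    pools=[StoragePool("gold", [NSD("nsd01", 2000, "SAS")]),
           StoragePool("silver", [NSD("nsd02", 8000, "NL-SAS")])],
    filesets=[Fileset("/projects/engineering"), Fileset("/projects/finance")],
)
print(fs.name, [p.tier for p in fs.pools], [f.path for f in fs.filesets])
```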

23 Logical structure of the file system
SONAS limits:
Up to 256 file systems per SONAS cluster
Max. 8 PB per file system
Up to 3,000* filesets per file system
Up to 65,536 (2^16) sub-directories per directory
Up to 2^63 files per file system (2^32 = 4,294,967,296 currently tested)
File attributes include the folder name; file data can be moved to different storage pools within the file system
GPFS limit: maximum number of files = (total file system space / 2) / (inode size + sub-block size); with SONAS and Storwize V7000U file modules the block size is 256 KB, a sub-block is 1/32 of a block = 8 KB, and the inode size is 512 bytes
(*) Up to 3,000 dependent and 1,000 independent filesets: an independent fileset has a separate inode space but shares physical storage with the remainder of the file system; a dependent fileset shares the inode space, quota limits and snapshot capability of the containing independent fileset
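Plugging the slide's numbers into the quoted formula gives a feel for the scale; the 100 TB file system size below is an assumed example.

```python
# Worked example of the max-files formula quoted above, with the slide's
# numbers (256 KB block, 8 KB sub-block, 512 B inode). The 100 TB file system
# size is an assumed example value.

inode_size = 512                        # bytes
subblock_size = 256 * 1024 // 32        # 1/32 of a 256 KB block = 8,192 bytes
fs_size = 100 * 2**40                   # 100 TB file system (assumption)

max_files = (fs_size / 2) / (inode_size + subblock_size)
print(f"{max_files:,.0f}")              # ~6.3 billion files
```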

24 When to use what?
Gateway: when the primary goal is SAN price/performance, and NAS should not create a separate disk infrastructure
ClusterFS: when the primary goal is managing significant capacity growth while avoiding systems proliferation
Filer: when the primary goal is ease of administration, with predictable capacity growth and without latency requirements

25 Storage Tiering in NAS
Tiers range from flash to tape, managed by IBM Easy Tier® and the IBM Active Cloud Engine®

26 Policy-based storage
Transparent to users; automatically:
Optimize the footprint by moving rarely used files to the densest media tier
Optimize performance by placing only hot data onto the fastest storage pool
Lower storage costs over time by moving inactive files to tape and deleting unwanted or expired files
Improve administrator productivity through policies for file management
Improve data protection with policy-based file backup
Migrate inactive data to tape, cloud storage or ProtecTIER via a TSM server
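The sketch below (illustration only; it is not the GPFS/Active Cloud Engine policy language, and pool names and thresholds are invented) shows the kind of rule such policies express: migrate files to a cheaper pool once their last access crosses an age threshold, transparently to users.

```python
# Deliberately simplified illustration (not the GPFS/Active Cloud Engine policy
# language): files whose last access is older than a threshold are migrated to
# a cheaper pool. Pool names and thresholds are my own.

import time
from dataclasses import dataclass

DAY = 86_400

@dataclass
class ManagedFile:
    path: str
    last_access: float                  # epoch seconds
    pool: str = "flash"

POLICIES = [
    ("tape",     365 * DAY),            # untouched for a year  -> tape via HSM
    ("nearline",  90 * DAY),            # untouched for 90 days -> NL-SAS pool
]

def apply_policies(files, now=None):
    now = now or time.time()
    for f in files:
        age = now - f.last_access
        for pool, threshold in POLICIES:          # most restrictive rule first
            if age > threshold:
                f.pool = pool                     # transparent: the path stays the same
                break
    return files

files = [ManagedFile("/home/a.mp4", time.time() - 400 * DAY),
         ManagedFile("/home/b.doc", time.time() - 100 * DAY),
         ManagedFile("/home/c.xls", time.time() - 5 * DAY)]
for f in apply_policies(files):
    print(f.path, "->", f.pool)         # tape, nearline, flash
```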

27 NAS storage: preserving performance & reducing costs
Keeping the NAS at most 85% full preserves performance and keeps TCO predictable
A 10 MB stale file on V7000 Unified or SONAS is replaced by a 4 kB stub, and the data moves to tape* or cloud at roughly half the incremental cost
(*) via TSM, in the future LTFS
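A quick back-of-the-envelope calculation shows what stubbing buys; the file count and the one-half cost ratio below are assumptions for illustration.

```python
# Back-of-the-envelope math for the stubbing example above: a stale 10 MB file
# is replaced on disk by a 4 kB stub while the payload moves to tape or cloud.
# The file count and the 1/2 cost ratio are assumptions for illustration.

stale_files = 100_000                   # assumed number of stale 10 MB files
file_mb = 10
stub_mb = 4 / 1024                      # 4 kB stub left behind on the NAS

disk_freed_gb = stale_files * (file_mb - stub_mb) / 1024
print(f"NAS disk space freed: ~{disk_freed_gb:,.0f} GB")   # ~976 GB

offload_cost_ratio = 0.5                # offloaded tier assumed to cost half per GB
print(f"cost of the moved capacity vs. keeping it on disk: {offload_cost_ratio:.0%}")
```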

28 How global data can be managed with policies and automation
IBM Active Cloud Engine® Global – spanning the private cloud, flash, offices, subsidiaries and other data centers:
Place and move files based on policies
Offload stale NAS data to tape (HSM)
Selective geo-replication
Cache data across remote locations
Overflow data from one data center to another

29 IBM Active Cloud Engine Global in SONAS 1.3
Policy-based file administration as in the Storwize V7000U, plus...
Corporate single namespace
WAN cloning functionality: one writable original + read-only cached clones*; roles can be switched in real time (cloud engine)
WAN acceleration: cache/push data across locations; enables CIFS over WAN
Client benchmark: Windows WAN performance ×20 – 6 MB/s becomes 100 MB/s (at 80 ms distance) or 70 MB/s (at 120 ms distance), single thread, file cache miss (!)
* restriction to be lifted later

30 Active Cloud Scan shrinks the backup window
Never crawl the file system tree! Metadata is instantly available from GPFS
Use it for backup, migration, overflow-to-tape, virus scans, etc.
Enumerating files with a "last accessed" property > 1 year takes milliseconds for a few objects and, growing steadily, only around a second for millions of objects
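The sketch below (illustration only; the in-memory "inode table" is invented) captures the point: candidates are enumerated from metadata rather than by crawling the directory tree with per-file stat() calls.

```python
# Sketch of the idea (illustration only): instead of walking the directory tree
# with per-file stat() calls, query a metadata table - as the GPFS policy engine
# does - for files whose last access is older than one year. The in-memory
# "inode table" below is purely made up.

import time

DAY = 86_400
inode_table = [                                  # (path, last_access_epoch)
    ("/archive/old_scan.tif", time.time() - 800 * DAY),
    ("/projects/report.docx", time.time() - 10 * DAY),
    ("/media/holiday_1998.mkv", time.time() - 400 * DAY),
]

def stale_files(table, older_than_days=365, now=None):
    """List backup/HSM candidates without touching the directory tree."""
    now = now or time.time()
    return [path for path, atime in table if now - atime > older_than_days * DAY]

print(stale_files(inode_table))                  # the two files untouched for a year
```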

31 Thank you!

