Monday 1 October 2012

E20-547 Brain Dump



Peer control station connects to eth1
The customer network connects to eth0
The FMP protocol is used to exchange information between MPFS clients and the VNX array
DNS service (SRV) resolution resolves service names instead of computer names. DNS returns a list of machines that run a specific service, such as LDAP and Kerberos
DNS servers are defined per Data Mover using the server_dns command
You must configure more than one DNS server per Data Mover when the CIFS servers are:
· not in the same Windows forest
· not served by the same DNS server
UDP is the preferred protocol for DNS
The protocol will switch to TCP when a message is larger than 512 bytes
2 NTP servers are recommended; use server_date to configure the NTP time service (see the sketch below)
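A minimal command sketch (mover name, domain, and IP addresses are placeholders) for defining DNS and NTP on a Data Mover:
server_dns server_2 corp.example.com 10.0.0.53,10.0.0.54    # define the DNS servers for a domain (UDP by default)
server_date server_2 timesvc start ntp 10.0.0.123 10.0.0.124    # start the NTP time service against two servers
server_date server_2 timesvc stats ntp    # check NTP synchronisation status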
To migrate between domains use server_cifs -Migrate or -Replace
The CIFS server created on a physical Data Mover with no specified interfaces becomes the default server; it is associated with all unused interfaces and any interfaces attached subsequently
Use the server_export command to export shares (see the sketch below)
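A minimal sketch (mover, share, and path names are placeholders) of exporting the same file system as a CIFS share and an NFS export:
server_export server_2 -Protocol cifs -name eng_share /eng_fs    # CIFS share
server_export server_2 -Protocol nfs -option rw=10.0.1.0/24,root=10.0.1.5 /eng_fs    # NFS export with access options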
Unicode 3.0 standard
Share names are limited to 12 characters in ASCII; with Unicode, 80 characters are supported
Comment length is 256 bytes (256 ASCII characters)
Best practice is to enable Unicode at the start; you cannot go back to ASCII, and converting to Unicode after installation can cause interruption
Quotas can be configured using the VNX CLI, Unisphere, or the Windows Server interface
Activate quotas before populating the file system; when quotas are first activated, the whole file system is unavailable while the quota initialisation process scans it (see the sketch below)
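A minimal sketch of managing quotas from the Control Station CLI (assumption: the nas_quotas flags are from memory and may vary by DART release; the file system name is a placeholder):
nas_quotas -on -user -fs eng_fs    # turn on user quotas for a file system (do this before populating it)
nas_quotas -report -user -fs eng_fs    # report current usage against the quotas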
VNXe 3100/3300 – file and iSCSI block
VNX 5100 – FC Block only
VNX5300/5500/5700/7500 – File and Block
VG2/VG8 – file only
The Control Station runs Red Hat Enterprise Linux 5
ifconfig eth0 (shows the eth0 properties and IP address)
Quotas can be managed via the CLI, the Unisphere GUI, or Windows hosts
Antivirus is part of the VNX Event Enabler infrastructure
SAN Copy is not part of any suite; it comes as standard

VNX protection suites            Total Protection   Total Efficiency
FAST Suite                       N                  Y
Security and Compliance Suite    N                  Y
Local Protection Suite           Y                  Y
Remote Protection Suite          Y                  Y
Application Protection Suite     Y                  Y


FAST Suite
VNX FAST VP (less-active data goes to slower drives; more-active data goes to faster drives or cache)
VNX FAST Cache – up to 2 TB of cache
Unisphere Analyzer
Unisphere Quality of Service Manager

Security and Compliance Suite
Event Enabler – auditing, sets retention periods for files
File-Level Retention – WORM
VNX Host Encryption – PowerPath Encryption / volume-level encryption / RSA keys

Local Protection Suite
VNX SnapView – block snaps, 8 snap sessions max per LUN
VNX SnapSure – file snaps, 16 writable snaps max, 96 read-only snaps, 112 altogether
RecoverPoint/SE – Continuous Data Protection

Remote Protection Suite
VNX MirrorView – MirrorView/S 1:1 relationship, MirrorView/A 4:1 relationships
VNX Replicator – local copy, remote copy, IP based, file replication, failover, failback
RecoverPoint/SE
Continuous Remote Replication (CRR) – FC-IP, data compression, checksum verified

Application Protection Suite (manages snaps, clones)
Replication Manager
Data Protection Advisor for Replication

Data Protection Advisor (collects > correlates > analyses > presents) – alerts, advice, trending
RAID type   Min disks   Max disks   Data protection                Usage
0           3           16          None (striped)                 Use when speed is the main concern
1           2           2           Mirrored                       Provides data integrity (ideal for OSes, database log files)
1/0         2           16          Mirrored stripes               Performance and protection are the main concern (largest amount of disk wastage)
3           5 or 9      5 or 9      Parity on a single disk
5           3           16          Parity across all disks        Low-cost protection is the main consideration; good random read, very good sequential read performance
6           4           16          Dual parity across all disks   Low-cost protection is the main consideration; good random read, very good sequential read performance

Magic bytes – shed stamp, write stamp, time stamp, checksum
Disks are formatted at 520 bytes per sector; 512 bytes hold accessible data and 8 bytes are used by the array for background verify, which checks the data and corrects it at the SP
A shed stamp is used with RAID 3/5 and performs parity shedding
A write stamp is used with RAID 5 to detect writes that complete with less than a full stripe
A time stamp is used with RAID 3/5 to detect when an incomplete stripe is written
A checksum is used in all RAID types to detect, but not correct, errors in the sector
RAID 6 parity shedding
When a disk fails, writes are written to the parity disk, creating a shed position; once in a shed position there is no protection, and reads then come from the shed
RAID 6 uses row parity and diagonal parity

There are 2 possible shed positions
Sniff verify (runs at an increased rate when the LUN is idle) – low-priority check of the entire storage system, very low impact on performance
Background verify –
·         high priority check of a LUN or RG
·         can be initiated automatically by the system
·         once started it runs to completion and cannot be stopped
·         starts automatically if during a trespass the OS determines the data could be inconsistent
X-Blade standbys must have identical I/O modules
1 X-Blade standby for 3 primaries
ALUA uses failover mode 4
I/O is redirected to the other SP over the CMI channel
Unisphere tabs (Dashboard > System > Storage > Hosts > Data Protection > Settings > Support)
SSH to the Control Station
NaviSecCLI to VNX Block
Storage groups are Access Logix containers for grouping LUNs
RAID groups are limited to 16 disks
RAID groups can be defragmented, except for RAID 6
RAID 6 needs an even number of disks
RAID 1/3/6 cannot be expanded
EMC recommends 5 disks for RAID 5 and 8 disks for RAID 6 in storage pools
SPs have up to 24 GB of RAM at 1330 MHz DDR3 and use Xeon 5600 CPUs
NFS, pNFS (NFSv4.1)
CIFS, MPFS (Multi-Path File System)
Object storage: REST, SOAP – Atmos VE
The VNX series gateway is file only
A VNX gateway supports up to four back-end arrays
X-Blades for the file front end
Storage Processors for the block front end
The Control Station is used to configure, manage and upgrade the X-Blades and to manage failover. Each X-Blade enclosure has 2 X-Blades running VNX OE, and you can have up to 8 X-Blades
X-Blades can be configured for N+1 or N+M
DPE (Disk Processor Enclosure)
SPE (Storage Processor Enclosure)
I/O modules for X-Blades
4 x 1 GBase-T
2 x 1 GBase-T + 2 x 1 GbE optical
2 x 10 GbE optical
2 x 10 GbE TwinAx
I/O modules for SPs
4 x 8 Gb FC
4 x 1 GBase-T (iSCSI)
2 x 10 GbE optical (iSCSI)
2 x 10 GbE TwinAx (iSCSI)
2 x FCoE
VG2 – N+1
VG8 – N+1 or N+M
x-blade failover times
minimal – 15/30 seconds
maximal – 1 minute 50 seconds
FTP and NDMP must be restarted manually after failover
Flash (EFD/SSD) 3.5": 100 GB, 200 GB – 3000 IOPS
SAS 3.5": 300 GB, 600 GB (195 drives/rack), 10k/15k – 140 IOPS (10k), 180 IOPS (15k)
SAS 2.5": 300 GB, 600 GB (500 drives/rack), 10k
NL-SAS 3.5": 1 TB, 2 TB, 3 TB (195 drives/rack), 7.2k – 90 IOPS (7.2k)
DAE config rules
Max number of enclosures per bus: 10
Max number of slots per bus: 250
Serial Attached SCSI (SAS) back end
6 Gb/s per lane = 750 MB/s
4 lanes x 750 MB/s = 3 GB/s
I/O ports on the SP: slot 4 has a 4-port FC I/O card that connects to the X-Blades; otherwise those ports are used for block connectivity
The Control Station eth1 port is used to connect to the peer Control Station when there are 2 Control Stations
eth0 connects to the primary X-Blade config port
eth2 connects to the secondary X-Blade
eth3 connects to the customer/user network for management; Unisphere is accessed over this interface
VNX 5100/5300/5500 use DPE’s
VNX 5700/7500 use SPE’s
During installation, PowerPath determines all paths to a device and builds a table that decides how to route I/O. PowerPath reads the table and directs I/O to a native or pseudo device.
PowerPath licensing
· PowerPath – unlocks all load balancing and path failover
o Symmetrix optimized
o CLARiiON optimized
o Adaptive optimized for non-EMC arrays
· PowerPath SE
o Back-end (between switch and array) failover only
o Single HBA supported
o No load balancing
o Up to two paths per device
· PowerPath/VE
o Same as the PowerPath license but only supports W2K8, Hyper-V and vSphere
PowerPath load-balancing policies
· sym_opt / clar_opt / adaptive (default)
· Round Robin – I/O is distributed down each path in turn
· Least I/O – I/O is sent down the path with the least I/O in the queue
· Least Blocks – I/O is sent down the path with the least blocks in the queue
· Request – failover only
· No Redirect – disables path failover and load balancing; default for Symmetrix and CLARiiON with no license
· Basic Failover – fabric failover
Auto probe – tests for dead paths (uses SCSI inquiry commands every 30 seconds)
Auto restore – tests for restored paths (runs every 5 minutes)
Powerpath can work with cluster software
PowerPath commands (usage sketch after this list)
1. powermt check
2. powermt check_registration
3. powermt config
4. powermt display
5. powermt display dev=all
6. powermt display options
7. powermt display paths
8. powermt load
9. powermt remove
10. powermt restore
11. powermt save
12. powermt set mode
13. powermt set periodic_autorestore
14. powermt set policy
15. powermt set write_throttle
16. powermt set write_throttle_queue
17. powermt version
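A minimal usage sketch of the most common powermt commands (the device and policy values are examples):
powermt display dev=all    # show all managed devices and the state of each path
powermt set policy=co dev=all    # set the CLARiiON-optimised load-balancing policy on all devices
powermt check    # remove any dead paths from the configuration
powermt save    # persist the current PowerPath configuration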
PowerPath Encryption uses RSA to encrypt data
PowerPath Migration Enabler – non-disruptive data migrations, but needs replication software
Basic maintenance (use the Procedure Generator)
USM (Unisphere Service Manager) activities
· Registration
· Generate and view system configs
· Hardware
o Installation (DAE, disk, I/O module, SFP)
o Replacement (helps replace failed hardware)
· Software
o System software (helps select the appropriate software to install)
o Powerlink downloads (disk firmware)
o Disk firmware
· Diagnostics
o Verify storage system (health checks, produces reports)
USM can manage VNX, CLARiiON, and Celerra running DART 5.6
F – some components have failed
T – storage system or component is in a transition state
U – system is unmanaged
X – storage system is inaccessible
? – unsupported
Enclosure power LED
Green (15-drive enclosure) – AC or DC applied, outputs in range
Blue (25-drive enclosure) – AC or DC applied, outputs in range
Off – all outputs out of range
Enclosure fault LED
Off – no fault
Amber – any enclosure fault, including SP faults
Array up and running:
Power LED on the SP is green
Fault LED on the SP is off
DPE status is blue or green
First 4 drives are green
USM tools to check firmware
Identify failed hardware with Unisphere
· Enter the IP in a browser window
· Select the storage system from the dashboard
· Select System > Storage Hardware (or Hardware for File)
· Check for failed components
Verify storage system
·         Initial system check
·         Capturing config data
·         Generating issues report
Find the failed disk using Unisphere > click Replace Disk (launches USM)
The SP event log also shows failed disks
Mixing is allowed: a DAE-15 with a DPE-25, or a DAE-25 with a DPE-15
DAE6S – 3U, 15 drives
DAE5S – 2U, 25 drives
The maximum number of enclosures is calculated based on the max number of drives
Create pools > Advanced
Pool alerts: percent-full threshold
FAST Cache: enable/disable
RAID group > Advanced
Auto-delete after the last LUN is deleted
Expand and defrag priority: High, Medium, Low
Allow power savings
Thin LUNs cannot be used in the reserved LUN pool, as the write intent log, or as clone private LUNs
Thick LUNs cannot be used as the write intent log or clone private LUNs; they can be used in the reserved LUN pool
Thin and thick LUNs cannot be used with metaLUNs
Updating the host information
Right click host > properties > storage > update
Only available when the host agent is installed or it is an ESX/ESXi host
Info includes – SCSI devices, physical device address, FS on device, storage system the device belongs to

VNX protection suite – SnapView, SnapSure, RecoverPoint/SE CDP
Clones track changes to the source LUN in the fracture log
Clones and snaps can be managed via the Unisphere GUI, NaviSecCLI, or admsnap
SnapView copies use approximately 20% of the size of the source LUN
When a clone is fractured, the clone tracks data changes to the source LUN via the fracture log
Snapshot session names can be up to 64 characters long
Snapshot chunks are 64 KB (128 blocks)
Snapshot prerequisites
· Source LUNs must be bound
· Assign the snaps to a storage group
· Enable data access control
· Reserved LUNs must be bound
Best practice is to have clone LUNs and source LUNs on separate disks
Reserved LUN recommendations
Average source LUN size = total size of the source LUNs / number of source LUNs
Reserved LUN size = 10% of the average source LUN size
Create twice as many reserved LUNs as source LUNs
Example: source LUNs of 10 GB, 20 GB, 30 GB and 100 GB
160 GB / 4 = 40 GB average
Make each reserved LUN 4 GB in size (40 GB x 10%)
Make 8 reserved LUNs (twice as many as source LUNs)
Snapshot sessions are persistent
Persistent – will survive failures and trespasses (SP reboot or failure, storage system failure or reboot)
Consistent –
· preserves the point in time across LUNs
· available on a per-session basis
· counts as one of the eight sessions per LUN
Snapshot rollback
· the source LUN is available for reads and writes
· the copy happens in the background
· the user should flush host buffers using admsnap
· the source device should be offline and the snap taken offline
VNX SAN Copy – the source LUN can remain online during the copy
A clone LUN needs to be exactly the same size as the source LUN
When removing a clone, it cannot be in an active synchronisation or reverse-sync process
The following can't be cloned:
· hot spare LUNs
· clone LUNs
· snapshot LUNs
· private LUNs
Clone sync rules
· sync is from the source to the clone LUN
· clone private LUNs are used for incremental syncs
· fracture logs are saved persistently on disk
· two LUNs of at least 1 GB each
· host access: the source accepts I/O at all times; the clone accepts no I/O during sync
Clone reverse-sync rules
· the source must be taken offline prior to the reverse sync
· during the reverse sync, the clone's data (including any writes made to the clone) is synced to the source
Clone reverse sync – protected restore
· protected
o host-to-source writes are not mirrored to the clone
o configured via an individual clone property
· non-protected
o host-to-source writes are mirrored to the clone
o records source writes in the clone private LUN for subsequent syncs
For protected restore, check the Use Protected Restore option in the Add Clone dialog box
"Instant restore"; 8 clones per source LUN
Consistent fracture: fracturing more than one clone at the same time in order to preserve a point-in-time restartable copy across the set of clones
Consistent operations are available from Unisphere or NaviSecCLI
You cannot perform a consistent fracture between different storage systems
If there is a failure on any of the clones, the consistent fracture fails on all clones. If any clone within the group was fractured prior to the failure, the software re-syncs that clone
Clone consistent fracture rules
· the source LUN for each clone has to be unique
o you cannot fracture multiple clones that belong to the same source
o you cannot consistently fracture if one clone is already fractured
· clones appear as administratively fractured (once fractured)
Failures affect all clones in the group
MirrorView/S and MirrorView/A cannot mirror a clone
SnapSure
· snapshot – point-in-time view of data
· SavVol – stores the original data blocks to preserve the point-in-time view, and holds modified blocks if a writable snap is used
· bitmap – identifies changed data blocks in the production file system
· blockmap – records the location of the data blocks in the SavVol
· baseline snap – a read-only snap from which a writable snap can be created
A new blockmap is created for each snap, and the bitmap gets cleared

The bitmap and blockmap are saved to disk but paged into Data Mover memory; up to 1 GB of RAM is allocated for SnapSure
A bitmap consumes 1 bit for every 8 KB block in the PFS
A blockmap consumes 8 bytes for every 8 KB block in the snapshot's SavVol

1 MiB of system memory for every GiB of SavVol

For Data Movers with less than 4 GB of memory, 512 MB is allocated for blockmap storage
server_sysstat -blockmap shows Data Mover memory consumed by blockmaps
SavVol size is usually 20% of the size of the PFS
Manual savVol creation
·         manually create VNX meta volume
·         specify that metavolume when creating the snap
·         volume sizing limits apply
A full SavVol invalidates all snaps using it, i.e. all snaps become inactive
SavVol auto-extension grows the SavVol in 20 GB increments
Extension is triggered by the HWM (default HWM 90%)
The SavVol will not exceed 20% of the total NAS storage (this can be changed in the parameter file /nas/sys/nas_param)
If there is not enough disk space, the oldest snapshot is overwritten
To disable SavVol auto-extension, set the HWM to 0%
SnapSure will then overwrite the oldest snap
Auditing and extending are important to prevent unwanted data loss
SnapSure limitations
· max 96 snaps per PFS + 16 writable snaps, 112 total
· do not schedule snaps on the hour, as this conflicts with the VNX OE for File database backup
FSCK – sanity check the PFS, without taking it offline, by running fsck against a writable snap
Writable snaps
· can be mounted and exported as read-only
· share the SavVol with read-only snaps
· add write capability to both local and remote snaps
Writable snaps must be deleted before the baseline snap
Writable snaps cannot be refreshed (snap schedule)
A baseline read-only snap can have 1 writable snap; you cannot snap a writable snap
Per PFS
· 16 writable snaps
· 96 read-only snaps
· total 112 snaps per PFS (96 + 16)
A scheduled refresh of a read-only snap will fail if a writable snap exists
A warning is displayed when a writable snap is created on a scheduled read-only snap

Total number of file systems per cabinet: 4096
Max number of mounted file systems per Data Mover: 2048
Writable snap limits
Not supported with:
· VNX Replicator
· file system extension with a writable snap
· VTLUs?
· TimeFinder/FS
Writable snaps cannot be accessed through CVFS. CVFS (Checkpoint Virtual File System) gives users read-only access to snaps and saves admins from having to run restores from backup for small requests
Name of the hidden directory: .ckpt
Name customisation is via the param file
CVFS names can only be changed when remounting the snap, via CLI only
Shadow Copy clients via the Microsoft Volume Shadow Copy Service
nas_ckpt_schedule (see the sketch below)
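A minimal sketch of checkpoint management from the Control Station (assumption: the flag names are from memory and should be checked against the man pages for your DART release; file system and schedule names are placeholders):
nas_ckpt_schedule -list    # list existing checkpoint schedules
nas_ckpt_schedule -create eng_daily -filesystem eng_fs -recurrence daily -runtimes 22:30 -keep 7    # daily snap at 22:30 (avoid the top of the hour), keep 7
fs_ckpt eng_fs -name eng_fs_ckpt1 -Create    # one-off read-only checkpoint of a PFS
fs_ckpt eng_fs_ckpt1 -refresh    # refresh it, recycling SavVol space and keeping the name/ID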
Refreshing snapshots
When you refresh a snapshot, VNX SnapSure deletes the snapshot and creates a new one, recycling the SavVol space and maintaining the file system name, ID and mount state
Scheduled snaps that are baselines of writable snaps will not refresh until the writable snap is deleted
Checkpoint refresh of a writable snap is not supported
Refreshing a snap requires freezing the snap. If an NFS client tries to access the snap during a refresh, the system keeps retrying indefinitely; when the snap thaws it is re-mounted. If a CIFS client attempts to access a snap during a refresh then, depending on the application, or if the freeze lasts more than 45 seconds, the application may drop the link and the share may need to be re-mapped
The PFS needs to be paused to delete snaps out of order; doing this can have an impact on performance
When you delete a snap that is not the oldest, SnapSure compacts the snap and merges the blockmap entries into the next-oldest snap before the delete completes
RecoverPoint
CDP (Continuous Data Protection)
CRR (Continuous Remote Replication)
CLR (Concurrent local & Remote Replication CRR & CDP for same volume)
RecoverPoint appliance volumes
Use                   Min size   Max size
Repository volume     3 GB       3 GB
Journal pools         5 GB       10 TB
Replication volumes   –          2 TB (minus 512)
One journal is needed for each consistency group. Journals hold snaps of the data to be replicated and are usually 20% of the size of the data replicated.
Host-based splitter (kdriver)
Installed on the host above the SCSI multipathing driver
kutils is installed with the driver
I. application write I/O
II. file system
III. volume manager
IV. kdriver splits the writes
V. PowerPath
VI. SCSI driver
VII. HBAs
When splitting is performed by the switch, a standalone version of kutils must be installed on the host
array based splitter
·         requires recoverpoint/SE enabler
·         lower cost
·         increased scale
·         supports mixed splitter environment
·         RPA storage group needed
·         RPA needs to be registered with the array
·         Array and RPA ports zoned with each other
·         Splitter runs in the SP’s
Multi cluster array based splitter
·         Supports up to 4 recover point/SE clusters
·         LUN is dedicated to single recoverpoint/SE cluster
·         Max of 2048 LUNs across all clusters
·         Recoverpoint/SE 3.3 or higher
·         Array splitter 3.3 or higher
CDP write flow
The write is split to the RPA; when received, an acknowledgement is sent. The host splitter holds the write until an acknowledgement is also received from the storage; when both acks are received, the ack is returned to the host. Once the RPA has acknowledged the write, it moves the data into the local journal.
A consistency group consists of one or more replication sets
Distributed consistency groups span four sets of RPAs
5 GB journal for regular consistency groups
20 GB journal for distributed consistency groups (4 x regular consistency groups)
DCGs are supported on Gen 3 or 4; max of 8 distributed consistency groups per cluster
Consistency group states – Replicating
I. the CG is enabled
II. the splitter is replicating to the RPAs
III. the RPAs are replicating to the remote journal
IV. image access is disabled (the default state)
V. the journal also distributes snapshots to the replica storage
Marking
I. the CG is enabled
II. the splitter is replicating to the RPAs
III. the RPAs are unable to replicate to the journal
IV. when the link is back up, it carries on syncing
Disabled
· the splitter does not split writes to the RPAs; caused by the CG being set to disabled or by a disaster
Data marking and transfer
· full sweep – when a group is enabled for the first time
· volume sweep – a subset of a full sweep
· marking mode – when a new replication pair is added to a CG
First initialization
Initialization
Marking mode
· short resync
o the delta between source and target is small
o caused by high load or a WAN outage
· long resync
o the delta is too large for the journal
o the snap is segmented and transferred
o each segment is distributed before the next segment is transferred
Image access enable mode (logged access)
allow_long_resync
System pane, traffic pane, nav pane, component pane
Configure LUNs as read-only (used by service providers)
To create a read-only LUN use storagegroup -addhlu with the read-only option
This can only be done in the CLI
naviseccli storagegroup -list shows which LUNs are read/write or read-only
storagegroup -list -readonly
To make the LUN read/write, remove it from the storage group with storagegroup -removehlu
Then add the LUN back in using -addhlu without the read-only option (see the sketch below)
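A minimal sketch (SP IP, storage group name and LUN numbers are placeholders; the exact name of the read-only switch on -addhlu is an assumption, check the NaviSecCLI reference):
naviseccli -h 10.0.0.10 storagegroup -addhlu -gname SG_svc -hlu 5 -alu 27 -readonly    # present ALU 27 as HLU 5 read-only (switch name assumed)
naviseccli -h 10.0.0.10 storagegroup -list -readonly    # list the read-only LUN mappings
naviseccli -h 10.0.0.10 storagegroup -removehlu -gname SG_svc -hlu 5    # remove the read-only mapping
naviseccli -h 10.0.0.10 storagegroup -addhlu -gname SG_svc -hlu 5 -alu 27    # re-add the LUN read/write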
AVM best practices
Slice volumes by default (the slice checkbox is enabled by default)
The thin checkbox is enabled by default
Pool striping
The first step divides the LUNs into thin and thick groups. AVM then tries to stripe 5 dvols together of the same size, with the same data services, and in an SP-balanced manner. If 5 can't be found, AVM tries 4, then 3, then 2 dvols to stripe. If thick LUNs/dvols can't be found, it uses thin LUNs/dvols
Pool concatenation
If AVM cannot find enough thin or thick LUNs/dvols to stripe, it tries to concatenate enough thick LUNs to meet the size requirement; if there are not enough thick LUNs it tries to concatenate enough thin LUNs; if that fails it concatenates thin and thick LUNs/dvols together; and if that fails, the operation fails
Course notes
Best practice for hot spares is 1 for every 30 disks
Exam tip: read all the white papers on block and file
FE errors – UltraFlex I/O modules
M errors
BE errors
Access Logix handles initiator records
Linux supports 16 initiator ports (exam question)
Windows supports 32 initiator ports
HLU – host LUN ID
ALU – array LUN ID
Vault disks are the first 4 (FLARE code, firmware, DART)
7500 – SPE (Storage Processor Enclosure)
DPE – Disk Processor Enclosure
Modified RAID 3 on the vault drives helps with large sequential reads/writes
VNX is active/passive
(Exam question) What is the command to give a user the root ID? nas/bin/addrootidtouser root_user
(Exam question) Looking at alerts: SSL is used for event auditing and alerts
Kerberos time skew tolerance for authentication – 5 minutes
(Exam question) nas/bin/celerra is where the logs are kept
(Exam question) x.509 certificates on VNX
A certificate is assigned to the SP IPs; if you change an SP IP you need a new certificate
Max 16 disks in a RAID group
RAID 1/0 – bus balancing
RAID 1 is used to configure FAST Cache
RAID 3 – bit parity
RAID 4/5/6 – block parity, dispersed
Read cache
Write cache
System cache
Applications will read sequentially if written well for performance
Max 255–256 LUNs on a RAID group
RAID group best practice
Use 70% of the total raw IOPS
Example: 1000 IOPS needed, 200 IOPS per disk
1000 / 0.7 ≈ 1430 raw IOPS required, so use 7 disks
7 x 200 = 1400 IOPS
When creating LUNs in the GUI, the first/most-available-space option can choose the vault drives
(Exam question) The default when creating LUNs is storage pools, not RAID groups
storagegroup -list -readonly will show read-only LUNs
Max disks in a pool: all attached disks minus the 4 vault disks
E-Lab Navigator shows the compatibility matrix
Data is written with an 8-byte stamp area (the extra 8 bytes per 520-byte sector):
· 2 x write stamp
· 2 x time stamp
· 2 x parity info
· 2 x CRC
Up to 2 TB of FAST Cache on the 7500
The Control Station gets connected to slot 0
Block LDAP: the master pushes info to the slaves
File LDAP: has to be configured on each slave
For file, when logging into the Control Station you need to be nasadmin to run config commands
Write cache
Forced flushing above 90%
Idle flushing below the low watermark (default 70%)
Uses a watermark algorithm
(Exam question) naviseccli -h x.x.x.x port -list -hba | more
(Exam question) scli (SANsurfer) determines the HBAs
Linux LUN rescanning needs I/O quiescing
The Linux 2.4 kernel is disruptive; best to reboot
The Linux 2.6 kernel has dynamic LUN scanning
The most effective way to find new LUNs in Linux is to perform a reboot
cat /proc/scsi/scsi shows HBA/path/LUN IDs
Use the above to verify LUNs
(Exam question) Where do you verify LUNs? /proc/scsi
iscsiadm -m node shows the target nodes
Verify LUNs with PowerPath:
powermt display dev=all (see the sketch below)
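A minimal sketch of verifying LUNs on a Linux host (host numbers and device paths are placeholders):
cat /proc/scsi/scsi    # list the HBAs, targets and LUN IDs seen by the kernel
echo "- - -" > /sys/class/scsi_host/host0/scan    # dynamic rescan on a 2.6 kernel (repeat for each hostN)
iscsiadm -m node    # show the iSCSI target nodes
powermt display dev=all    # confirm the LUNs and paths through PowerPath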
ALUA – Asymmetric Logical Unit Access
PowerPath failover needs array failover mode 4 for ALUA to work
Update all hosts to verify LUNs in Unisphere
(Exam question) What can oversubscribe CPU cycles? SMP can do this
VSI (Virtual Storage Integrator) – EMC APIs (DHSM, Distributed Hierarchical Storage Management)
DHSM needs to be configured for cloning and dedupe to work
(Exam question) The certificate cannot be MD5 signed to bypass certificate checking
Each LUN has a runtime name; the part that does not change is the LUN ID in the name – the first 9 characters, naa.xxxxxxxxxxx
Use the Server Utility to run the high availability verification test (HAVT)
or
System Reports > High Availability reports
(Exam question) When using shared storage, each ESX server needs the same LUN ID for tasks like vMotion or Storage vMotion
(Exam question) What are some file and block features/extensions?
Storage – dedupe
File – compression
MetaLUNs
Element size multiplier = 4
Up to 16 LUNs can be striped in a metaLUN; stripe if you want more I/O (it takes time to build), concatenate if space is needed quickly
LUN migration is limited to 44 Mb/s
LUN migration is used to change the RAID type of a LUN, or to move to a LUN with more space
When doing an NDU with metaLUNs it can delete data
Storage pools use a bitmap to locate where a LUN's data blocks are
Compression
Data needs to be decompressed to read or write it
(Exam question) Compression works on 64 KB chunks
(Exam question) FAST VP moves data in 1 GB chunks
Sparse LUN support will keep polling for new LUNs if the LUN IDs do not match up
If data is hit 3 times it will stay in the cache; you can have up to 2 TB of cache configured using FAST Cache with RAID 1 configured EFDs
LUN shrink on Windows Server 2008 uses DISKRAID.exe
File
UxFS is the file system that holds the CIFS and NFS shares
DART – Data Access in Real Time
UxFS holds UIDs, GIDs and SIDs
NTP is needed for Kerberos authentication
For Data Movers to talk externally they need a default gateway configured
To telnet to the Control Station you need to be nasadmin
FSN – Fail-Safe Network
server_sysconfig – command to list PCI devices
server_ifconfig – command to show IP address info (see the sketch below)
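A minimal sketch of inspecting a Data Mover's hardware and interfaces (server_2 is an example mover name; the -pci and -all options are assumed from memory):
server_sysconfig server_2 -pci    # list the PCI devices on the Data Mover
server_ifconfig server_2 -all    # show all interfaces and their IP address info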
IPv6 is for management purposes only; configure IPv6 in the Control Station properties or via the CLI
A default gateway needs to be configured before the CIFS server is added to the domain
CLI command to configure DNS:
server_dns server_2 -o stop (stops the DNS service)
The default protocol for DNS is UDP (because it's faster)
NTP status – server_date server_2 timesvc stats ntp
server_devconfig is used to configure devices
LUNs are chopped up into disk volumes (dvols)
File system (up to 16 TB) > metavolume (at least 2 MB) > slice volume (needed for SnapSure) > stripe volume > disk volume (dvol) > LUNs > RAID groups
File system pools can only be extended with system-defined pools, not user-defined pools
(Exam question) The two pool types for file are system-defined and user-defined
The two types of pools for block are virtual pools and RAID groups
You can have raw file systems that you provision straight to the host
UxFS is like WAFL
DART is like ONTAP
The auto-extend buffer size is 3% below the HWM
A nested file system can export a group of single file systems as one
Use VSI to attach NFS storage
Usermapper is only for CIFS, to map UIDs and GIDs
Secmap holds the UIDs and GIDs
(Exam question) You should not touch the starting point for UIDs and GIDs
(Exam question) What value does it start at? 32768
There is one primary Usermapper
Virtual Data Movers are for CIFS only
CIFS needs to be configured on each Data Mover
To see if CIFS has joined the domain: CIFS server properties, or Background Tasks for File under System
Global share – don't choose a Data Mover
Local share – choose a Data Mover
Supports SMB2
Overview
· UxFS – native file system for NFS and CIFS
· Usermapper – 3 roles: primary, secondary, client
· Storage pool profiles for file – system-defined/user-defined
· Two storage profiles – volume/storage
· Sliced volumes – used for SnapSure
· DART – Data Access in Real Time
Virtual Data Movers are for CIFS only
Root file system for a Data Mover – 256 MB
Root file system for a virtual Data Mover – 128 MB; virtual Data Movers need to be read-only in order to be migrated
Normal user quotas
Tree quotas
Quotas are configured on the parent directory
Soft quota limits warn; hard quota limits deny writes once you go beyond the quota allocation
A tree quota overrides a normal user quota
8191 max tree quotas per file system; you can't have a tree quota within a tree quota, i.e. they cannot be nested
(Exam question) By default the "file size" policy for quotas is set in the file system path when you have a new directory
Home directory support is for CIFS; use NT security only
Domain:username:/path
CIFS home directory entry structure as above
In the .etc folder you can add a wildcard so that you do not have to input each user individually:
Domain:*:/path
Command to map the home drive (names as written in the original notes; see also the sketch below):
net user esele /domain /homedir:\\llceldmz\home
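A minimal sketch of the home directory pieces (domain, user, server and path names are placeholders; the .etc/homedir location and the HOME share name are assumptions from memory):
# entries in the homedir file on the CIFS server's root file system:
#   CORP:jsmith:/hd_fs/home/jsmith
#   CORP:*:/hd_fs/home
net use H: \\cifsserver\HOME    # map the HOME share from a Windows client
net user jsmith /domain /homedir:\\cifsserver\HOME    # set the user's AD home directory to the share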
It is recommended to use the CAVA sizing tool to determine how many CAVA servers are needed
Excluded files are configured in viruschecker.conf
CAVA is only available for CIFS
File filtering is done at the CIFS share level
Filter files are stored in \.filefilter
Up to 8 devices in an Etherchannel
2 to 12 ports in a link aggregation (LACP)
(Exam question) What is the definition of VLAN trunking? 802.1Q
cge = copper Gigabit Ethernet
fge = fibre Gigabit Ethernet
fxg = optical 10 Gigabit Ethernet
Network failures will not cause Data Movers to fail over to the standby DM
Cross-stack Etherchannel is between two separate switches (see the sketch below)
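A minimal sketch of creating an LACP trunk and putting an interface on it (assumption: the device names, IP values and the -option string format are from memory and should be checked against the server_sysconfig/server_ifconfig man pages):
server_sysconfig server_2 -virtual -name lacp0 -create trk -option "device=cge0,cge1 protocol=lacp"    # aggregate two cge ports with LACP
server_ifconfig server_2 -create -Device lacp0 -name lacp0_if -protocol IP 10.0.1.50 255.255.255.0 10.0.1.255    # IP interface on the trunk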
There are 2 types of monitoring: centralised and distributed
The monitoring host cannot be connected to the storage or be in any storage groups
VNX alerts
·         Warning
·         Critical
·         Informational
·         Error
VNX alert Responses
·         Email
·         SNMP, Syslog
·         Custom Application
·         Paging, SMS

30,000 hex code errors and alerts
Unisphere host agent needs to be installed on the monitoring host
Use server utility to check if esx has high availability
(Exam Question) Fault icons
·         F – Fault
·         X – inaccessible
·         U – unmanaged
·         T – Changing State
·         ? – Unsupported
2 weeks retention for file statistics, up to 26 weeks
Audit rules are in /etc/audit/audit.rules
Main record types
·         Syscall
·         PATH
·         CWD
·         USER_XXX
·         FS_WATCH
Each Control Station is synced every 180 seconds
(Exam question) Where are the audit logs located? /celerra/audit
User events come from the MMC Windows event logs
SnapView for block
Up to 8 snapshot sessions per LUN
4 snapshot sessions per reserved pool LUN (RPL); when over 4 sessions it will need to use the next RPL
The protection enabler is needed for snapshots
admsnap to schedule snaps
Before starting a snap you need to sync the server's memory buffers using the admsnap utility (see the sketch below)
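A minimal sketch of admsnap around a snapshot session (device and session names are placeholders; assumption: flag spellings are from memory, check the admsnap help output):
admsnap flush -o /dev/sdc    # on the production host, flush the server's buffers for the source device
admsnap start -s nightly_session -o /dev/sdc    # start the snapshot session on the source device
admsnap activate -s nightly_session    # on the backup host, activate the session to access the snapshot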
Overview
· LACP – 6 active links
· CAVA – antivirus
· Snapshots – COFW, 8 active
· VDMs
· Quotas – normal quotas/tree quotas
· CIFS auditing is enabled through the MMC snap-in
· VNX auditing is enabled by default
· Network failover – FSN, LACP, Etherchannel, cross-stack Etherchannel
· Monitoring – centralised/distributed
· File extension monitoring – noext
For cloning, both LUNs need to be the same RAID type and on the same type of disk, or you will get performance hits
All volumes/file systems use a SavVol to save snapshots
96 read-only snapshots per file system
16 writable snapshots per file system
Snapshots have a blockmap
Deleting snapshots requires you to delete the writable snap first, then the read-only snaps
The bitmap gets deleted along with the snapshot
A new bitmap is created for each new snapshot
You cannot snap a writable snapshot
CVFS naming convention
yyyy_mm_dd_hh_mm_ss
Snaps cannot be scheduled at the beginning of the hour, as this conflicts with the VNX OE database backup
