BitLocker

Check BitLocker’s Status

C:\Windows\System32>manage-bde -status
BitLocker Drive Encryption: Configuration Tool version 10.0.22621
Copyright (C) 2013 Microsoft Corporation. All rights reserved.

Disk volumes that can be protected with
BitLocker Drive Encryption:
Volume C: [WIN11]
[OS Volume]

Size: 953.00 GB
BitLocker Version: 2.0
Conversion Status: Used Space Only Encrypted
Percentage Encrypted: 100.0%
Encryption Method: XTS-AES 128
Protection Status: Protection Off
Lock Status: Unlocked
Identification Field: Unknown
Key Protectors: None Found

Volume S: [YOGA]
[Data Volume]

Size: 953.55 GB
BitLocker Version: 2.0
Conversion Status: Encryption in Progress
Percentage Encrypted: 38.7%
Encryption Method: AES 128
Protection Status: Protection Off
Lock Status: Unlocked
Identification Field: Unknown
Automatic Unlock: Disabled
Key Protectors:
Password
Numerical Password
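
Both volumes above report Protection Off, and volume C: has no key protectors at all. A minimal sketch for finishing the job on the OS volume, assuming a TPM-equipped machine and an elevated prompt, is to add protectors and then turn protection on:

C:\Windows\System32>manage-bde -protectors -add C: -TPM -RecoveryPassword
C:\Windows\System32>manage-bde -protectors -enable C:
C:\Windows\System32>manage-bde -status C: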

Redmine REST API

Enable REST Web Service

  1. Redmine -> Administration -> Settings -> API -> Enable REST web service

Get user API access key

  1. My account -> API access key -> Show
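
With the key in hand, any REST client can authenticate by sending it in the X-Redmine-API-Key header. A minimal sketch with curl; the hostname and key are placeholders:

curl -H "X-Redmine-API-Key: <your-api-key>" https://redmine.example.com/issues.json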

User management - VMware ESXi

Log in to the ESXi Host

Create a new user

[root@myvmesxi02:~] esxcli system account add --id=chengman --password=mypassword --password-confirmation=mypassword --description="ESXI Admin"

Assign administrator role

[root@myvmesxi02:~] esxcli system permission set --id=chengman --role=Admin

Verify the user and permissions

[root@myvmesxi02:~] esxcli system account list
User ID Description
-------- -----------
root Administrator
dcui DCUI User
vpxuser VMware VirtualCenter administration account
chengman ESXI Admin

[root@myvmesxi02:~] esxcli system permission list
Principal Is Group Role Role Description
--------- -------- ----- ----------------
cradmin1 false Admin Full access rights
cradmin2 false Admin Full access rights
chengman false Admin Full access rights
dcui false Admin Full access rights
root false Admin Full access rights
vpxuser false Admin Full access rights
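
If the account later has to be removed, the reverse operations are a short sketch (verify the esxcli namespaces against your ESXi release):

[root@myvmesxi02:~] esxcli system permission unset --id=chengman
[root@myvmesxi02:~] esxcli system account remove --id=chengman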

NOI commands

Check Software Components Version

$NCHOME/omnibus/bin/nco_id -v
# /san/tivoli/netcool/bin/nco_id -v
Netcool/OMNIbus 8.1.0 - September 2020
NCHOME: /san/tivoli/netcool
IMHOME: /san/tivoli/NetcoolIM/IBMIM/eclipse
IMDATA: /san/tivoli/NetcoolIM/IBMIMData


System Information
Host Name: pplxnsms02
Operating System Name: Linux
Operating System Version: 4.18.0-425.3.1.el8.x86_64
Operating System Architecture: amd64
Current time: Thu Feb 06 15:03:36 HKT 2025
Free Disk space under NCHOME: 121,808 Mbyte

Products:
Product Name: IBM Tivoli Netcool/OMNIbus
Product Version: 8.1.0
SWGFMIDX: HITNET810
Build Date:
Build Level:

Fix Packs:
Fix Name: IBM_Tivoli_Netcool_OMNIbus
Fix Version: 8.1.0.24
Fix ID: 8.1.0-TIV-NCOMNIbus-FP0024
Type:
SWGFMIDX: HITNET810
Build Date: 2020-10-07
Build Level: 5.50.94

Installation Manager(IM) Information:

[Package Group]
Name: IBM Netcool Core Components
Installation Directory: /san/tivoli/netcool

[Package]
Name: Network Manager Core Components (com.ibm.tivoli.netcool.itnm.core)
Version: 4.2.0.11 (4.2.11.20201026_2006)
Features:
Additional cryptographic routines (non.fips.compliant)

[Package]
Name: Network Manager topology database creation scripts (com.ibm.tivoli.netcool.itnm.dbscripts)
Version: 4.2.0.11 (4.2.11.20201026_2006)
Features:
DB2 Database Server creation scripts (db2.feature)
Oracle Database Server creation scripts (oracle.feature)

[Package]
Name: IBM Tivoli Netcool/OMNIbus (com.ibm.tivoli.omnibus.core)
Version: 8.1.0.24 (5.50.94.20201007_1514)
Features:
Administrator GUI (nco_admin_gui_feature)
Administrator tools (nco_admin_tools_feature)
Bridge server (nco_bridgeserv_feature)
Extensions (nco_extensions_feature)
ObjectServer gateways (nco_g_objserv_feature)
Gateway support (nco_gateways_support_feature)
Netcool MIB Manager (nco_mib_manager_feature)
ObjectServer (nco_objserv_feature)
Operator GUI (nco_operator_gui_feature)
Process agent (nco_pa_feature)
Probe support (nco_probes_support_feature)
Proxy server (nco_proxyserv_feature)
TEC migration (nco_tec_migration)

[Package]
Name: Netcool/OMNIbus Gateway nco-g-jdbc (com.ibm.tivoli.omnibus.integrations.nco-g-jdbc)
Version: 1.7.0.0 (1.7.0.6)
Features:
None.

[Package]
Name: Netcool/OMNIbus Probe nco-p-mttrapd (com.ibm.tivoli.omnibus.integrations.nco-p-mttrapd)
Version: 1.20.0.0 (1.20.0.2)
Features:
None.

[Package]
Name: Netcool/OMNIbus Probe nco-p-tivoli-eif (com.ibm.tivoli.omnibus.integrations.nco-p-tivoli-eif)
Version: 1.13.0.0 (1.13.0.7)
Features:
None.


[Package Group]
Name: IBM WebSphere Application Server V8.5
Installation Directory: /san/tivoli/IBM/WebSphere/AppServer

[Package]
Name: Jazz for Service Management extension for IBM WebSphere 8.5 (com.ibm.tivoli.tacct.psc.install.was85.extension)
Version: 1.1.2.1 (1.1.2001.20201130-0718)
Features:
None.

[Package]
Name: IBM WebSphere Application Server (com.ibm.websphere.BASE.v85)
Version: 8.5.5.18 (8.5.5018.20200910_1821)
Features:
IBM 64-bit WebSphere SDK for Java (com.ibm.sdk.6_64bit)
EJBDeploy tool for pre-EJB 3.0 modules (ejbdeploy)
Embeddable EJB container (embeddablecontainer)
Stand-alone thin clients and resource adapters (thinclient)

[Package]
Name: IBM WebSphere SDK Java Technology Edition (Optional) (com.ibm.websphere.IBMJAVA.v70)
Version: 7.0.9.30 (7.0.9030.20160224_1826)
Features:
None.


[Package Group]
Name: Core services in Jazz for Service Management
Installation Directory: /san/tivoli/IBM/JazzSM

[Package]
Name: IBM Dashboard Application Services Hub (com.ibm.tivoli.tacct.psc.tip.install)
Version: 3.1.3.9 (3.1.3100.20201130-0718)
Features:
Configuration (com.ibm.tivoli.tacct.psc.install.server.feature.tip.config)


[Package Group]
Name: IBM Netcool GUI Components
Installation Directory: /san/tivoli/IBM/OMNIbus_gui

[Package]
Name: Network Manager GUI Components (com.ibm.tivoli.netcool.itnm.gui)
Version: 4.2.0.11 (4.2.11.20201026_2006)
Features:
None.

[Package]
Name: Network Health Dashboard (com.ibm.tivoli.netcool.itnm.gui.health)
Version: 4.2.0.11 (4.2.11.20201026_2006)
Features:
None.

[Package]
Name: Network Manager Reports (com.ibm.tivoli.netcool.itnm.reports)
Version: 4.2.0.11 (4.2.11.20201026_2006)
Features:
None.

[Package]
Name: IBM Tivoli Netcool/OMNIbus Web GUI (com.ibm.tivoli.netcool.omnibus.webgui)
Version: 8.1.0.21 (8.1.21.202012020138)
Features:
Install base features (WebGUI.feature)


Compilation Information
Compilation Date: Tue May 27 10:36:32 BST 2014
Compilation Machine: rhat5es-build1.hursley.ibm.com
Compilation System: Linux 2.6.18-274.17.1.el5 x86_64
Code Generation: PRODUCTION

Shared Object Library Information
libnetcool: 5.50.94
network::ipv6: 5.50.20
libnoam: 5.50.20
libnsecurity: 5.50.66
libnregion: 5.50.76
libnmemstore: 5.50.92
libncmd: 5.50.89
libnstore: 5.50.93
libnproc: 5.50.74
libnauto: 5.50.46
libnobjserv: 5.50.93
libnipc: 5.50.78
libnstk: 5.50.93
libnipc_client: 5.50.78
libniduc_client: 5.50.20
libniduc_server: 5.50.58
libngtk: 5.50.93
libngobjserv: 5.50.93
libnhttpd: 5.50.93
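
A common follow-up to checking versions is confirming that the OMNIbus processes are actually running. A sketch using the process agent status tool; the process agent name NCO_PA is an assumption, so substitute your configured agent:

$NCHOME/omnibus/bin/nco_pa_status -server NCO_PA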

HPE 3PAR 8000 commands

Show User

showuser

Show Network

shownet -d

Show all LUN assigned in UNIX (AIX)

root@suuxifdb02:/root# /usr/bin/HP3PARInfo -i (HPELUNInfo -i)

Device File Name Size [MB] Tgt Lun LUN WWN VV Name Code Rev Serial#
======================================================================================================================================================================================
/dev/hdisk2 40960 10000 1000000000000 60002ac0000000000000011100024634 UAT-SAS-AIX-IFDB-raw-01 3.3.1 MU5 4C19331444
/dev/hdisk3 40960 10000 2000000000000 60002ac0000000000000011200024634 UAT-SAS-AIX-IFDB-raw-02 3.3.1 MU5 4C19331444
/dev/hdisk4 40960 10000 3000000000000 60002ac0000000000000011300024634 UAT-SAS-AIX-IFDB-raw-03 3.3.1 MU5 4C19331444
/dev/hdisk5 1048576 10000 4000000000000 60002ac0000000000000011400024634 UAT-SAS-AIX-IFDB-raw-04 3.3.1 MU5 4C19331444
/dev/hdisk6 1048576 10000 5000000000000 60002ac0000000000000011500024634 UAT-SAS-AIX-IFDB-raw-05 3.3.1 MU5 4C19331444
/dev/hdisk7 1049600 10000 6000000000000 60002ac0000000000000011600024634 UAT-SSD-AIX-IFDB-raw-06 3.3.1 MU5 4C19331444
/dev/hdisk8 1048576 10000 7000000000000 60002ac0000000000000011700024634 UAT-SSD-AIX-IFDB-raw-07 3.3.1 MU5 4C19331444
/dev/hdisk9 1048576 10000 8000000000000 60002ac0000000000000011800024634 UAT-SSD-AIX-IFDB-raw-08 3.3.1 MU5 4C19331444
/dev/hdisk10 1048576 10000 9000000000000 60002ac0000000000000011900024634 UAT-SSD-AIX-IFDB-raw-09 3.3.1 MU5 4C19331444
/dev/hdisk11 2097152 10000 b000000000000 60002ac0000000000000010d00024634 UAT-AIX-12-DB-temp 3.3.1 MU5 4C19331444
/dev/hdisk12 204800 10000 a000000000000 60002ac0000000000000011a00024634 UAT-SAS-AIX-IFDB-raw-10 3.3.1 MU5 4C19331444
/dev/hdisk13 30720 10000 c000000000000 60002ac0000000000000011b00024634 UAT-SAS-AIX-IFDB-raw-11 3.3.1 MU5 4C19331444
/dev/hdisk14 409600 10000 d000000000000 60002ac0000000000000011c00024634 UAT-SSD-AIX-IFDB-raw-12 3.3.1 MU5 4C19331444
/dev/hdisk15 1024 20000 e000000000000 60002ac0000000000000012c00024634 UAT-SSD-AIX-IFDB-raw-13 3.3.1 MU5    4C19331444

Show CPG (Common Provisioning Group)

susn3par01 cli% showcpg
----Volumes---- -Usage- -------------(MiB)-------------
Id Name Warn% VVs TPVVs TDVVs Usr Snp Base Snp Free Total
2 CPG_DSU_FC_r1 - 123 31 0 123 35 46514944 35840 6972672 53523456
6 CPG_DSU_SSD_r1 - 9 0 0 9 0 10310656 0 0 10310656
0 FC_r1 - 0 0 0 0 0 0 0 0 0
1 FC_r6 - 0 0 0 0 0 0 0 0 0
3 SSD_r1 - 0 0 0 0 0 0 0 0 0
4 SSD_r5 - 0 0 0 0 0 0 0 0 0
5 SSD_r6 - 0 0 0 0 0 0 0 0 0
-------------------------------------------------------------------------------
7 total 132 35 56825600 35840 6972672 63834112

Show host group

susn3par01 cli% showhostset
Id Name Members
0 suuxifdb_set suuxifdb01
suuxifdb02
1 suuxesdb_set suuxesdb01
suuxesdb02
2 suvmesxi_app suvmesxi01
suvmesxi03
suvmesxi05
suvmesxi07
suvmesxi09
suvmesxi11
suvmesxi13
suvmesxi15
3 suvmesxi_mgt suvmesxi08
suvmesxi10
suvmesxi02
suvmesxi04
suvmesxi06
4 suuxifnet_set suuxfnet01
suuxfnet02
5 suwsbkup_set suwsbkup01
suwsbkup02
---------------------------
6 total         21

Show hosts

ppsn3par01 cli% showhost
Id Name Persona -WWN/iSCSI_Name- Port
29 ppuxaix1 AIX-legacy C050760C0372000C 2:0:1
28 ppvmesxi15 VMware 1000B47AF16DB62A 1:0:1
1000B47AF16DB624 1:0:2
1000B47AF16DB624 3:0:2
1000B47AF16DB62A 3:0:1
1000B47AF16DB624 2:0:2
1000B47AF16DB62A 2:0:1
1000B47AF16DB62A 0:0:1
1000B47AF16DB624 0:0:2
1 ppuxifdb02 AIX-legacy C050760B6CE20000 0:0:1
C050760B6CE20000 1:0:1
C050760B6CE20000 2:0:1
C050760B6CE20004 0:0:2
C050760B6CE20004 1:0:2
C050760B6CE20004 2:0:2
C050760B6CE20004 3:0:2
C050760B6CE20000 3:0:1
13 ppwsbkup02 WindowsServer 10009440C90A3227 1:0:1
10009440C90A3257 1:0:2
10009440C90A3257 3:0:2
10009440C90A3227 3:0:1
10009440C90A3257 2:0:2
10009440C90A3227 2:0:1
10009440C90A3227 0:0:1
10009440C90A3257 0:0:2
---------------------------------------------------
233 total

Show host paths

ppsn3par01 cli% showhost -pathsum
Id Name WWNs Ports Nodes
29 ppuxaix1 1 1 2
10 ppuxckmu01 2 8 0,1,2,3
11 ppuxckmu02 2 8 0,1,2,3
14 ppuxefap01 2 8 0,1,2,3
15 ppuxefap02 2 8 0,1,2,3
16 ppuxesap01 2 8 0,1,2,3
17 ppuxesap02 2 8 0,1,2,3
18 ppuxesdb01 2 8 0,1,2,3
19 ppuxesdb02 2 8 0,1,2,3
6 ppuxfnet01 2 8 0,1,2,3
7 ppuxfnet02 2 8 0,1,2,3
0 ppuxifdb01 2 8 0,1,2,3
1 ppuxifdb02 2 8 0,1,2,3
8 ppuxinap01 2 8 0,1,2,3
9 ppuxinap02 2 8 0,1,2,3
2 ppvmesxi01 2 8 0,1,2,3
3 ppvmesxi02 2 8 0,1,2,3
4 ppvmesxi03 2 8 0,1,2,3
5 ppvmesxi04 2 8 0,1,2,3
20 ppvmesxi05 2 8 0,1,2,3
21 ppvmesxi06 2 8 0,1,2,3
22 ppvmesxi07 2 8 0,1,2,3
23 ppvmesxi08 2 8 0,1,2,3
24 ppvmesxi09 2 8 0,1,2,3
25 ppvmesxi10 2 8 0,1,2,3
26 ppvmesxi11 2 8 0,1,2,3
27 ppvmesxi13 2 8 0,1,2,3
28 ppvmesxi15 2 8 0,1,2,3
12 ppwsbkup01 2 8 0,1,2,3
13 ppwsbkup02 2 8 0,1,2,3
--------------------------------
30 total

Create a Virtual Volume (VV)

susn3par01 cli% createvv CPG_DSU_SSD_r1 UAT-SSD-AIX-IFDB-raw-13 1G

Export VV to a hostset (group of hosts)

susn3par01 cli% createvlun UAT-SSD-AIX-IFDB-raw-13 14 set:suuxifdb_set
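
To confirm the volume and its export, showvv and showvlun can be filtered on the new VV name, for example:

susn3par01 cli% showvv UAT-SSD-AIX-IFDB-raw-13
susn3par01 cli% showvlun -v UAT-SSD-AIX-IFDB-raw-13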

Show Physical Disk (PD)

spsn3par01 cli% showpd -failed
-Size(MiB)-- ----Ports----
Id CagePos Type RPM State Total Free A B Capacity(GB)
487 21:7:0 FC 10 failed 1142784 0 2:1:1* 3:1:1* 1200
-------------------------------------------------------------------
1 total 1142784 0

Dismiss a physical drive

spsn3par01 cli% showpd -failed -degraded
-Size(MiB)-- ----Ports----
Id CagePos Type RPM State Total Free A B Capacity(GB)
487 21:7:0? FC 10 failed 1142784 0 ----- ----- 1200
-------------------------------------------------------------------
1 total 1142784 0

spsn3par01 cli% dismisspd 487

Show Service Status

spsn3par01 cli% servicemag status
Cage 21, magazine 7:
The magazine was successfully brought offline by a servicemag start command.
The command completed at Wed Aug 21 22:58:36 2024.
The command started at Wed Aug 21 22:58:26 2024
servicemag start -wait -pdid 487 -- Succeeded
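
Once the failed drive has been physically replaced, the magazine is normally returned to service with servicemag resume. A sketch using the cage and magazine shown above; confirm the exact syntax on your InForm OS release:

spsn3par01 cli% servicemag resume 21 7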

HPE MSA2060 commands

HPE MSA2060 Command List

# help
abort copy - Aborts a copy volume operation.
abort replication - Aborts the current replication operation for the specified replication set.
abort scrub - Aborts a media scrub operation.
abort verify - Aborts a media verify operation.
activate firmware - Updates the firmware bundle stored inside the controller.
add disk-group - Creates a disk group using specified disks.
add host-group-members - Adds hosts to a host group.
add host-members - Adds initiators to a host.
add ipv6-address - Adds a static IPv6 address for a controller network port.
add spares - Designates specified available disks to be spares.
add volume-group-members - Adds volumes to a volume group.
check firmware-upgrade-health - Checks that the system is ready for a firmware upgrade.
check update-server - Checks the status of a configured update server.
clear alerts - Clears all the alerts from the active list, and forces a fresh analysis of the system for any active alert conditions.
clear cache - Clears unwritable cache data from both controllers.
clear disk-metadata - Clears metadata from leftover disks.
clear dns-parameters - Clears configured DNS settings for each controller module.
clear events - Clears the event log in controller A, B, or both.
clear expander-status - Clears the counters and status for SAS expander lanes.
clear fde-keys - Clears the lock key ID and import lock ID used with full disk encryption.
clear replication-queue - Clears the replication queue for a specified replication set.
clear update-server-proxy - Clears the username, password, host, and port values configured for the update-server proxy and disables the proxy.
copy volume - Copies all data in a specified source volume to a destination volume.
create certificate - Creates or removes a custom security certificate.
create host-group - Creates a host group that includes specified hosts.
create host - Creates a host with an associated name.
create peer-connection - Creates a peer connection between two storage systems.
create remote-system - Creates a persistent association with a remote storage system.
create replication-set - Creates a replication set for a specified volume or volume group.
create schedule - Schedules a task to run automatically.
create snapshots - Creates a snapshot of each specified source volume.
create task - Creates a task that can be scheduled.
create user-group - Creates a user group in the storage system to match an LDAP group.
create user - Creates a user account.
create volume-group - Creates a volume group that includes specified volumes.
create volume-set - Creates a specified number of volumes in a pool.
create volume - Creates a volume in a pool.
delete all-snapshots - Deletes all snapshots associated with a specified source volume.
delete host-groups - Deletes specified host groups and optionally all hosts in those host groups.
delete hosts - Deletes specified hosts that are not in a host group.
delete initiator-nickname - Deletes manually created initiators or the nicknames of discovered initiators.
delete peer-connection - Deletes a peer connection between two storage systems.
delete pools - Deletes specified pools.
delete remote-system - Deletes the persistent association with a remote system.
delete replication-set - Deletes a replication set.
delete schedule - Deletes a task schedule.
delete snapshot - Deletes specified snapshots.
delete task - Deletes a task.
delete user-group - Deletes an LDAP user group.
delete user - Deletes a user account.
delete volume-groups - Deletes specified volume groups and optionally all volumes in those groups.
delete volumes - Deletes specified volumes.
dequarantine - Removes a disk group from quarantine.
exit - Log off and exit the CLI session.
expand disk-group - Adds disks to a disk group to expand its storage capacity.
expand volume - Expands a base volume.
fail - Forces the partner controller module to crash for a non-maskable interrupt.
help - Shows brief help for all available commands or full help for a specific command.
map volume - Maps volumes to initiators.
meta - In API mode only, shows all property metadata for objects.
ping - Tests communication with a remote host.
query metrics - Shows one or more collected data points for a list of metrics.
query peer-connection - Queries a storage system to potentially use in a peer connection and shows information about the storage system via the in-band query.
recover replication-set - Provides options to recover a replication set after a disaster.
release volume - Clears initiator registrations and releases persistent reservations for all or specified volumes.
remote - Runs a command on a remote system that is associated with the local system.
remove disk-groups - Removes specified disk groups.
remove host-group-members - Removes specified hosts from a host group.
remove host-members - Removes specified initiators from a host.
remove ipv6-address - Removes a static IPv6 address from a controller network port.
remove spares - Removes specified spares.
remove volume-group-members - Removes volumes from a volume group.
replicate - Initiates replication of volumes in a replication set.
rescan - Forces rediscovery of disks and enclosures in the storage system.
reset all-statistics - Resets performance statistics for both controllers.
reset ciphers - Clears user-supplied ciphers and sets the cipher list to the system default.
reset controller-statistics - Resets performance statistics for controllers.
reset disk-error-statistics - Resets error statistics for all or specified disks.
reset disk-statistics - Resets performance statistics for disks.
reset dns-management-hostname - Resets each controller module's management hostname to the factory default.
reset host-link - Resets specified controller host ports (channels).
reset host-port-statistics - Resets performance statistics for controller host ports.
reset pool-statistics - Clears resettable performance statistics for pools, and resets timestamps for those statistics.
reset smis-configuration - Resets the SMI-S configuration files.
reset snapshot - Replaces the data in a standard snapshot with the current data from its parent volume.
reset volume-statistics - Resets performance statistics for all or specified volumes.
restart mc - Restarts the Management Controller in a controller module.
restart sc - Restarts the Storage Controller in a controller module.
restore defaults - Restores the default configuration to the controllers.
resume replication-set - Resumes the replication operations for the specified replication set.
rollback volume - Replaces the data in a parent volume with the data from one of its snapshots.
scrub disk-groups - Analyzes specified disk groups to find and fix errors.
set advanced-settings - Sets advanced system configuration parameters.
set alert - Acknowledges specified alerts.
set ciphers - Configures a cipher list that the storage system can use to securely communicate with hosts through HTTPS or SMI-S.
set cli-parameters - Sets options that control CLI behavior.
set controller-date - Sets the date and time parameters for the system.
set debug-log-parameters - Sets the types of debug messages to include in the Storage Controller debug log.
set disk-group - Changes parameters for a specified disk group.
set disk-parameters - Sets parameters that affect disk operation.
set disk - Performs a secure erase on a specified disk.
set dns-management-hostname - Sets a domain hostname for each controller module to identify it for management purposes.
set dns-parameters - Configures settings to resolve domain names using the Domain Name Service (DNS).
set email-parameters - Sets SMTP notification parameters for events and managed logs.
set enclosure - Sets an enclosure's name, location, rack number, and rack position.
set expander-phy - Disables or enables a specific PHY.
set fde-import-key - Sets or changes the import lock key for the use of full disk encryption.
set fde-lock-key - Sets or changes the lock key for the use of full disk encryption.
set fde-state - Changes the overall state of the system for the use of full disk encryption.
set host-group - Sets the name of a host group.
set host-parameters - Sets controller host-port parameters for communication with attached hosts.
set host - Sets the name of a host and optionally the profile of the host and the initiators it contains.
set initiator - Sets the name of an initiator and optionally its profile.
set ipv6-network-parameters - Sets IPv6 parameters for the network port in each controller module.
set ldap-parameters - Configures the LDAP server parameters required to authenticate and authorize LDAP users.
set led - Turns a specified device's identification LED on or off to help you locate the device.
set network-parameters - Sets parameters for the network port in each controller module.
set ntp-parameters - Sets Network Time Protocol (NTP) parameters for the system.
set password - Sets a user's password for system interfaces (such as the CLI).
set peer-connection - Modifies a peer connection between two systems.
set pool - Sets parameters for a pool.
set prompt - Sets the prompt for the current CLI session.
set protocols - Enables or disables management services and protocols.
set remote-system - Changes remote-system credentials stored in the local system.
set replication-set - Changes parameters for a replication set.
set schedule - Changes parameters for a specified schedule.
set snapshot-space - Sets the snapshot space usage as a percentage of the pool and thresholds for notification.
set snmp-parameters - Sets SNMP parameters for event notification.
set syslog-parameters - Sets remote syslog notification parameters for events.
set system - Sets the system's name, contact person, location, and description.
set task - Changes parameters for a TakeSnapshot task.
set update-server - Configures an update server and a proxy, if required.
set user-group - Changes the settings for an LDAP user group.
set user - Changes preferences for a specified user for the session or permanently.
set volume-cache-parameters - Sets cache options for a volume.
set volume-group - Sets the name of a volume group.
set volume - Changes parameters for a volume.
show advanced-settings - Shows the settings for advanced system-configuration parameters.
show alert-condition-history - Shows the history of the alert conditions that have generated alerts.
show alerts - Shows information about the active alerts on the storage system.
show audit-log - Shows audit log data.
show cache-parameters - Shows cache settings and status for the system and optionally for a volume.
show certificate - Shows the status of the system's security certificate.
show ciphers - Shows the ciphers that the system is using to securely communicate with hosts.
show cli-parameters - Shows the current CLI session preferences.
show configuration - Shows system configuration information.
show controller-date - Shows the system's current date and time.
show controller-statistics - Shows live performance statistics for controller modules.
show controllers - Shows information about each controller module.
show debug-log-parameters - Shows which debug message types are enabled (On) or disabled (Off) for inclusion in the Storage Controller debug log.
show disk-group-statistics - Shows live performance statistics for disk groups.
show disk-groups - Shows information about disk groups.
show disk-parameters - Shows disk settings.
show disk-statistics - Shows live or historical performance statistics for disks.
show disks - Shows information about all disks or disk slots in the storage system.
show dns-management-hostname - Shows the management hostname for each controller module.
show dns-parameters - Shows configured DNS settings for each controller module.
show email-parameters - Shows email (SMTP) notification parameters for events and managed logs.
show enclosures - Shows information about the enclosures in the storage system. Full detail available in API output only.
show events - Shows events logged by each controller in the storage system.
show expander-status - Shows diagnostic information relating to SAS Expander Controller physical channels, known as PHY lanes.
show fans - Shows information about each fan in the storage system.
show fde-state - Shows full disk encryption information for the storage system.
show firmware-bundles - Displays the active firmware bundle and an available firmware bundle stored in the system's controller modules.
show firmware-update-status - Displays the current status of any firmware update on the system.
show frus - Shows SKU and FRU (field-replaceable unit) information for the storage system.
show host-groups - Shows information about host groups and hosts.
show host-phy-statistics - Shows diagnostic information relating to SAS controller physical channels, known as PHY lanes, for each host port.
show host-port-statistics - Shows live performance statistics for controller host ports.
show initiators - Shows information about initiators.
show inquiry - Shows inquiry data for each controller module.
show ipv6-addresses - Shows static IPv6 addresses assigned to each controller's network port.
show ipv6-network-parameters - Shows the IPv6 settings and health of each controller module's network port.
show ldap-parameters - Shows LDAP settings.
show license - Shows the status of licensed features in the storage system.
show maps - Shows information about mappings between volumes and initiators.
show metrics-list - Shows a list of all available types of metrics in the system.
show network-parameters - Shows the settings and health of each controller module’s network port.
show ntp-status - Shows the status of the use of Network Time Protocol (NTP) in the system.
show peer-connections - Shows information about a peer connection between two systems.
show pool-statistics - Shows live or historical performance statistics for pools.
show pools - Shows information about pools.
show ports - Shows information about host ports in each controller.
show power-supplies - Shows information about each power supply in the storage system.
show protocols - Shows which management services and protocols are enabled or disabled.
show provisioning - Shows information about how the system is provisioned.
show redundancy-mode - Shows the redundancy status of the system.
show refresh-counters - Deprecated
show remote-systems - Shows information about remote systems associated with the local system.
show replication-sets - Shows information about replication sets in the peer connection.
show replication-snapshot-history - Shows information about the snapshot history for all replication sets or a specific replication set.
show sas-link-health - Shows the condition of SAS expansion-port connections.
show schedules - Shows information about task schedules.
show sensor-status - Shows information about each environmental sensor in each enclosure.
show sessions - Shows information about user sessions on the storage system.
show shutdown-status - Shows whether each Storage Controller is active or shut down.
show snapshot-space - Shows snapshot-space settings for each pool.
show snapshots - Shows information about snapshots.
show snmp-parameters - Shows SNMP settings for event notification.
show syslog-parameters - Shows syslog notification parameters for events and managed logs.
show system-parameters - Shows certain storage system settings and configuration limits.
show system - Shows information about the storage system.
show tasks - Shows information about tasks.
show tier-statistics - Shows live performance statistics for tiers.
show tiers - Shows information about tiers.
show unwritable-cache - Shows the percentage of unwritable data in the system.
show update-server - Shows settings for a configured update server proxy.
show user-groups - Shows configured LDAP user groups.
show users - Shows configured user accounts.
show versions - Shows firmware and hardware version information for the system.
show volume-copies - Shows information about in-progress copy volume operations.
show volume-groups - Shows information about specified volume groups or all volume groups.
show volume-names - Shows volume names and serial numbers.
show volume-reservations - Shows persistent reservations for all or specified volumes.
show volume-statistics - Shows live performance statistics for all or specified volumes.
show volumes - Shows information about volumes.
show workload - Calculates the system's I/O workload, and shows the relationship between the workload and the amount of storage capacity used.
shutdown - Shuts down the Storage Controller in a controller module.
start metrics - Starts retention of specified dynamic metrics.
stop metrics - Stops data retention for specified dynamic metrics.
suspend replication-set - Suspends the replication operations for the specified replication set.
test - Sends a test message to configured destinations for event notification and managed logs.
trust - Enables an offline or quarantined-offline disk group to be brought online for emergency data recovery.
unfail controller - Allows the partner controller module to recover from a simulated failure performed with the fail command (which requires the standard role).
unmap volume - Deletes mappings for specified volumes.
verify disk-groups - Analyzes redundant disk groups to find inconsistencies between their redundancy data and their user data.
whoami - Shows domain information for the current user.

Show Maps

# show maps

Volume View [Serial Number (00c0ff65a8070000d039756201000000) Name (QGOUAT-X86-OS-01) ] Mapping:
Ports LUN Access Identifier Nickname Profile
--------------------------------------------------------------------------------------
1-4 1 read-write 00c0ff65a7df000043547e6201010000 quvmesxi_set.*.* Standard

Volume View [Serial Number (00c0ff65a8070000eee2686301000000) Name (QGOUAT-X86-SHARE-01) ] Mapping:
Ports LUN Access Identifier Nickname Profile
--------------------------------------------------------------------------------------
1-4 3 read-write 00c0ff65a7df000043547e6201010000 quvmesxi_set.*.* Standard

Volume View [Serial Number (00c0ff65a7df0000b864466401000000) Name (QGOUAT-X86-SHARE-02) ] Mapping:
Ports LUN Access Identifier Nickname Profile
--------------------------------------------------------------------------------------
1-4 4 read-write 00c0ff65a7df000043547e6201010000 quvmesxi_set.*.* Standard

Volume View [Serial Number (00c0ff65a7df0000443a756201000000) Name (QGOUAT-X86-MGT-50) ] Mapping:
Ports LUN Access Identifier Nickname Profile
--------------------------------------------------------------------------------------
1-4 2 read-write 00c0ff65a7df000043547e6201010000 quvmesxi_set.*.* Standard


Success: Command completed successfully. (2024-07-22 15:23:19)
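
For reference, a mapping like the ones above would be created with map volume. A sketch only; the volume name QGOUAT-X86-SHARE-03 is hypothetical, and the keyword syntax should be checked with help map volume on your firmware:

# map volume ports 1-4 lun 5 access read-write initiator quvmesxi_set.*.* QGOUAT-X86-SHARE-03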

Show Initiators

# show initiators
Nickname Discovered Mapped Profile Host Type ID
-------------------------------------------------------------------------
quvmesxi02_s2p0 Yes Yes Standard FC 100008f1eac08821
quvmesxi02_s3p0 Yes Yes Standard FC 100008f1eac078c3
quvmesxi04_s2p0 Yes Yes Standard FC 100008f1eac0887b
quvmesxi04_s3p0 Yes Yes Standard FC 100008f1eac07818
quvmesxi06_s2p0 Yes Yes Standard FC 100008f1eac0883c
quvmesxi06_s3p0 Yes Yes Standard FC 100008f1eac088de
-------------------------------------------------------------------------
Success: Command completed successfully. (2024-07-22 15:22:54)

Show Volumes

# show volumes
Pool Name Total Size Alloc Size Type Health Reason Action
--------------------------------------------------------------------------
B QGOUAT-X86-MGT-50 5199.9GB 3560.8GB base OK
A QGOUAT-X86-OS-01 2299.9GB 1203.6GB base OK
A QGOUAT-X86-SHARE-01 3999.9GB 3997.9GB base OK
B QGOUAT-X86-SHARE-02 2499.9GB 2282.3GB base OK
--------------------------------------------------------------------------
Success: Command completed successfully. (2024-07-22 15:18:04)

Show System

# show system
System Information
------------------
System Name: qusnmsas01
System Contact: Support
System Location: 17/F, Queensway Government Office, Admiralty, Hong Kong
System Information: QGO-UAT
Midplane Serial Number: 00C0FF65A081
Vendor Name: HPE
Product ID: MSA 2060 FC
Product Brand: MSA Storage
Enclosure Count: 1
Health: OK
Health Reason:
Other MC Status: Operational
PFU Status: Idle
Supported Locales: English (English), Spanish (español), French (français), German (Deutsch), Italian (italiano), Japanese (日本語), Korean (한국어), Dutch (Nederlands), Chinese-Simplified (简体中文), Chinese-Traditional (繁體中文)


Success: Command completed successfully. (2024-07-22 15:17:42)

Virtual Tape Libraries: Advantages and Disadvantages

Virtual Tape Libraries (VTLs) are long-term storage solutions that simulate data tape hardware while using an array of hard drives (HDDs) for the actual storage.

Many organizations have established processes for handling data archives. Enterprises may be reliant on certain backup software or recovery processes, which makes traditional tape migration impractical or undesirable.

A VTL offers a “best of both worlds” middle ground; the organization can retain their current backup/archival strategy while improving efficiency and reducing the time spent on data restoration. The VTL can be coordinated with traditional tape backups to reduce the physical space utilization of onsite hardware.

Of course, if virtual tape libraries were perfect, there would be no reason to use actual tape cartridges — but physical tape libraries continue to capture market share from HDD-based systems.

But VTLs make sense for many organizations, particularly enterprises with relatively well-defined backup/archive protocols. Below, we’ll discuss some of the major advantages and disadvantages of VTL utilization.

Advantages of Virtual Tape Libraries: Faster Restoration, Lower Deployment Costs

Generally, VTLs have lower initial deployment costs than new tape hardware, although the cost of implementation can vary. VTL solutions can function with all popular backup/archival applications, and the enterprise won’t need to change its practices to put the new system to use.

Other major advantages of VTLs:

  • The entire storage capacity of the disk array is available.
  • RAID can provide several layers of redundancy, and data deduplication can further improve overall storage utilization.
  • Hard drives are generally more efficient for read processes than legacy tape cartridges, especially when utilizing RAID.
  • VTL can significantly reduce disaster recovery time, particularly when compared with legacy tape formats.
  • VTLs support random access, while most legacy tape formats only support sequential data access.
  • In crowded datacenters, VTLs may utilize physical space more effectively.

But while virtual tape libraries are effective for many applications, they’re not without their faults — and advances in tape storage technology have nullified some of the benefits.

Disadvantages of Virtual Tape Libraries: Less Resiliency, High Long-Term Costs

One of the major advantages of tape is air-gapping, which provides protection against ransomware and other data security hazards. An air-gapped backup can be isolated from the rest of the data storage infrastructure, ensuring that the enterprise has a recovery option in worst-case scenarios.

VTLs are not air-gapped, nor are they intended to be transported outside of the data center; the VTL essentially acts as an onsite archive. High-capacity tapes can be easily taken off site or offline, so they’re ideal for creating “golden copies,” which are crucial for protecting against malware.

Other advantages of data tape cartridges over VTLs:

  • Current tape cartridge formats are significantly less expensive per-gigabyte than hard drives. At the time of writing, LTO-9 has a cost of about $0.0058/GB, and that price will continue to decrease in future generations.
  • Modern tape cartridges can utilize file systems such as LTFS to mimic random access (though it’s worth noting that “mimic” is a key phrase — LTFS, while powerful, is still limited to the physics of the tape cartridge).
  • In most operations, VTLs are not a full replacement for tapes; they’re intended as a complement to tape infrastructure. As a result, disaster recovery may not be any faster with a VTL in place — and the complexity of VTL implementation may actually add to the time needed for recovery.

Creating a Strategy for Data Archiving and Disaster Recovery

Ultimately, most enterprises require a combination of VTLs and physical tapes. To optimize the benefits of VTLs or physical tapes, the storage infrastructure must be designed for the organization’s specific needs. Any new implementation must be planned carefully, particularly if the goal is to limit tape hardware or to migrate away from a certain tape format.

If you’re considering a switch to VTL, or if you’re looking for ways to optimize your backup/archival processes, we can help.

With an extensive library of tape hardware, access to hundreds of current & legacy backup applications, and decades of experience, we create sustainable, cost-efficient strategies for data migration. Contact us today to schedule a consultation.

Clone rootvg (alt_disk_copy)

List all rootvg disks

root@temp02:/ # lspv
hdisk1 00c21251fa668def rootvg active
hdisk0 00c21251f4de5f81 rootvg active

List all file systems across rootvg, hdisk0, and hdisk1

root@temp02:/ # lsvg -l rootvg
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd5 boot 1 2 2 closed/syncd N/A
hd6 paging 2 4 2 open/syncd N/A
hd8 jfs2log 1 2 2 open/syncd N/A
hd4 jfs2 20 40 2 open/syncd /
hd2 jfs2 10 20 2 open/syncd /usr
hd9var jfs2 1 2 2 open/syncd /var
hd3 jfs2 28 56 2 open/syncd /tmp
hd1 jfs2 1 2 2 open/syncd /home
hd10opt jfs2 20 40 2 open/syncd /opt
hd11admin jfs2 1 2 2 open/syncd /admin
lg_dumplv sysdump 4 4 1 open/syncd N/A
livedump jfs2 1 2 2 open/syncd /var/adm/ras/livedump
lvsource jfs2 72 72 1 closed/syncd /source

root@temp02:/ # lspv -l hdisk0
hdisk0:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
livedump 1 1 00..01..00..00..00 /var/adm/ras/livedump
hd11admin 1 1 00..00..01..00..00 /admin
lg_dumplv 4 4 00..04..00..00..00 N/A
hd10opt 20 20 00..00..20..00..00 /opt
hd3 28 28 00..00..28..00..00 /tmp
hd1 1 1 00..00..01..00..00 /home
hd2 10 10 00..00..10..00..00 /usr
hd9var 1 1 00..00..01..00..00 /var
hd8 1 1 00..00..01..00..00 N/A
hd4 20 20 00..00..20..00..00 /
hd5 1 1 01..00..00..00..00 N/A
hd6 2 2 00..02..00..00..00 N/A

root@temp02:/ # lspv -l hdisk1
hdisk1:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
livedump 1 1 00..01..00..00..00 /var/adm/ras/livedump
lvsource 72 72 00..72..00..00..00 /source
hd11admin 1 1 00..00..01..00..00 /admin
hd10opt 20 20 00..00..20..00..00 /opt
hd3 28 28 00..00..28..00..00 /tmp
hd1 1 1 00..00..01..00..00 /home
hd2 10 10 00..00..10..00..00 /usr
hd9var 1 1 00..00..01..00..00 /var
hd8 1 1 00..00..01..00..00 N/A
hd4 20 20 00..00..20..00..00 /
hd5 1 1 01..00..00..00..00 N/A
hd6 2 2 00..02..00..00..00 N/A

Unmirror rootvg (remove hdisk0)

root@temp02:/ # unmirrorvg rootvg hdisk0
0516-1246 rmlvcopy: If hd5 is the boot logical volume, please run 'chpv -c <diskname>'
as root user to clear the boot record and avoid a potential boot
off an old boot image that may reside on the disk from which this
logical volume is moved/removed.
0516-1804 chvg: The quorum change takes effect immediately.
0516-1144 unmirrorvg: rootvg successfully unmirrored, user should perform
bosboot of system to reinitialize boot records. Then, user must modify
bootlist to just include: hdisk1.

root@temp02:/ # lspv
hdisk1 00c21251fa668def rootvg active
hdisk0 00c21251f4de5f81 rootvg active

root@temp02:/ # bosboot -ad hdisk1

bosboot: Boot image is 61489 512 byte blocks.

Verify and move the remaining logical volumes on hdisk0

# lspv -l hdisk0
hdisk0:
LV NAME LPs PPs DISTRIBUTION MOUNT POINT
lg_dumplv 4 4 00..04..00..00..00 N/A

# mklvcopy lg_dumplv 2 hdisk0 hdisk1

# syncvg -l lg_dumplv

# rmlvcopy lg_dumplv 1 hdisk0

# lspv -l hdisk0

Remove hdisk0 from rootvg

# reducevg rootvg hdisk0

# lspv
hdisk1 00c21251fa668def rootvg active
hdisk0 00c21251f4de5f81 None

Clone rootvg from hdisk1 -> hdisk0

# alt_disk_copy -d  hdisk0
Calling mkszfile to create new /image.data file.
Checking disk sizes.
Creating cloned rootvg volume group and associated logical volumes.
Creating logical volume alt_hd5.
Creating logical volume alt_hd6.
Creating logical volume alt_hd8.
Creating logical volume alt_hd4.
Creating logical volume alt_hd2.
Creating logical volume alt_hd9var.
Creating logical volume alt_hd3.
Creating logical volume alt_hd1.
Creating logical volume alt_hd10opt.
Creating logical volume alt_hd11admin.
Creating logical volume alt_lg_dumplv.
Creating logical volume alt_livedump.
Creating /alt_inst/ file system.
Creating /alt_inst/admin file system.
Creating /alt_inst/home file system.
Creating /alt_inst/opt file system.
Creating /alt_inst/tmp file system.
Creating /alt_inst/usr file system.
Creating /alt_inst/var file system.
Creating /alt_inst/var/adm/ras/livedump file system.
Generating a list of files
for backup and restore into the alternate file system...
Backing-up the rootvg files and restoring them to the alternate file system...
Modifying ODM on cloned disk.
Building boot image on cloned disk.
forced unmount of /alt_inst/var/adm/ras/livedump
forced unmount of /alt_inst/var/adm/ras/livedump
forced unmount of /alt_inst/var
forced unmount of /alt_inst/var
forced unmount of /alt_inst/usr
forced unmount of /alt_inst/usr
forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/home
forced unmount of /alt_inst/home
forced unmount of /alt_inst/admin
forced unmount of /alt_inst/admin
forced unmount of /alt_inst
forced unmount of /alt_inst
Changing logical volume names in volume group descriptor area.
Fixing LV control blocks...
Fixing file system superblocks...
Bootlist is set to the boot disk: hdisk0 blv=hd5
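
If the clone needs to be inspected before rebooting, alt_rootvg_op can wake it up (mounting its file systems under /alt_inst) and put it back to sleep afterwards; a sketch:

# alt_rootvg_op -W -d hdisk0
# alt_rootvg_op -S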

Verify the boot order

# bootlist -m normal -o

Destroy altinst_rootvg / old_rootvg

# lspv
hdisk0 00c86220cabdc88c rootvg active
hdisk1 00c86220d9e58aff altinst_rootvg

# alt_rootvg_op -X altinst_rootvg
Bootlist is set to the boot disk: hdisk0 blv=hd5

# lspv
hdisk0 00c86220cabdc88c rootvg active
hdisk1 00c86220d9e58aff None

Re-form AIX mirror

# extendvg -f rootvg hdisk1

# mirrorvg -S rootvg hdisk0 hdisk1
0516-1804 chvg: The quorum change takes effect immediately.
0516-1126 mirrorvg: rootvg successfully mirrored, user should perform
bosboot of system to initialize boot records. Then, user must modify
bootlist to include: hdisk0 hdisk1.

# bosboot -a -d hdisk0
trustchk: Verification of attributes failed: /etc/vfs
: mode

bosboot: Boot image is 61489 512 byte blocks.

# bosboot -a -d hdisk1
trustchk: Verification of attributes failed: /etc/vfs
: mode

bosboot: Boot image is 61489 512 byte blocks.

# bootlist -m normal hdisk0 hdisk1
# bootlist -m normal -o
hdisk0 blv=hd5 pathid=0
hdisk1 blv=hd5 pathid=0

# mklvcopy lg_dumplv 2 hdisk0 hdisk1

# syncvg -l lg_dumplv

# lsvg -l rootvg
rootvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
hd5 boot 1 2 2 closed/syncd N/A
hd6 paging 4 8 2 open/syncd N/A
hd8 jfs2log 1 2 2 open/syncd N/A
hd4 jfs2 96 192 2 open/syncd /
hd2 jfs2 80 160 2 open/syncd /usr
hd9var jfs2 80 160 2 open/syncd /var
hd3 jfs2 80 160 2 open/stale /tmp
hd1 jfs2 40 80 2 open/stale /home
hd10opt jfs2 80 160 2 open/stale /opt
hd11admin jfs2 40 80 2 open/stale /admin
lg_dumplv sysdump 44 88 2 open/syncd N/A
livedump jfs2 8 16 2 open/stale /var/adm/ras/livedump
lvsource jfs2 560 1120 2 open/stale /source
paging00 paging 192 384 2 open/stale N/A
lvu01 jfs2 784 1568 2 open/stale /u01
auditlv jfs2 16 32 2 open/stale /audit
lvnmon jfs2 16 32 2 open/stale /nmon

* Wait until all LV STATE values show syncd.
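
One way to watch the resync is to poll for stale logical volumes and, if any remain stale, kick off a manual sync; a sketch:

# lsvg -l rootvg | grep stale
# syncvg -v rootvg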

What is CrowdStrike, and what happened?

The cybersecurity giant CrowdStrike brought down thousands of systems after pushing a faulty update to Windows machines.

On Friday morning (July 19, 2024), some of the biggest airlines, TV broadcasters, banks, and other essential services came to a standstill as a massive outage rippled across the globe. The outage, which brought the Blue Screen of Death upon legions of Windows machines, is linked to just one software company: CrowdStrike.

CrowdStrike plays an important role in helping companies find and prevent security breaches, billing itself as having the “fastest mean time” to detect threats. Since its launch in 2011, the Texas-based company has helped investigate major cyberattacks, such as the Sony Pictures hack in 2014, as well as the Russian cyberattacks on the Democratic National Committee in 2015 and 2016. As of Thursday evening, CrowdStrike’s valuation was upwards of $83 billion.

It also has around 29,000 customers, more than 500 of them on the Fortune 1000 list, according to CrowdStrike’s website.

But that popularity put it in the position to wreak havoc when something went wrong, with systems using CrowdStrike and Windows-based hardware falling offline in droves this morning. CrowdStrike CEO George Kurtz said on Friday that the company is “actively working with customers impacted by a defect found in a single content update for Windows hosts” while emphasizing that the issue isn’t linked to a cyberattack. It also doesn’t affect Mac or Linux machines.

The July 19th outage is tied to CrowdStrike’s flagship Falcon platform, a cloud-based solution that combines multiple security solutions into a single hub, including antivirus capabilities, endpoint protection, threat detection, and real-time monitoring to prevent unauthorized access to a company’s system.

The update in question appears to have installed faulty software onto the core Windows operating system, causing systems to get stuck in a boot loop. Systems are showing an error message that says, “It looks like Windows didn’t load correctly,” while giving users the option to try troubleshooting methods or restart the PC. Many companies, including one airline in India, have resorted to the good old-fashioned way of doing things by hand.
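
For machines stuck in the loop, the widely circulated manual workaround was to boot into Safe Mode or the Windows Recovery Environment and delete the faulty channel file, roughly:

cd %WINDIR%\System32\drivers\CrowdStrike
del C-00000291*.sys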

“Our software is extremely interconnected and interdependent,” Lukasz Olejnik, an independent cybersecurity researcher, consultant, and author of the book Philosophy of Cybersecurity, tells The Verge. “But in general, there are plenty of single points of failure, especially when software monoculture exists at an organization.”

Although CrowdStrike has deployed a fix, getting things up and running won’t be a simple task. Olejnik tells The Verge that this issue could take “days to weeks” to resolve because IT administrators may need physical access to affected devices to get them working again. How fast that happens depends on the size and resources of a company’s IT team. “Some systems in certain specific circumstances may be unrecoverable, but I assume that the majority will be recovered,” Olejnik adds.