TSM v6.4 client on Ubuntu

I recently came across this, and it took me some time to find the right way to install a new TSM client on Ubuntu. The following procedure worked for me:

Download the latest TSM client:
ftp://public.dhe.ibm.com/storage/tivoli-storage-management/patches/client/v6r4/Linux/LinuxX86/BA/v640/
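
You can fetch and unpack the client package non-interactively, for example with wget (the tar file name below is a placeholder; check the directory listing for the actual name):

wget ftp://public.dhe.ibm.com/storage/tivoli-storage-management/patches/client/v6r4/Linux/LinuxX86/BA/v640/<client-package>.tar
tar -xvf <client-package>.tar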

Install alien (an RPM-to-DEB converter) and the required compatibility libraries:
apt-get install alien libstdc++5 ia32-libs

Create the .deb packages and install them:

alien -k gskcrypt64-8.0.14.14.linux.x86_64.rpm
alien -k gskssl64-8.0.14.14.linux.x86_64.rpm
alien -k TIVsm-API64.x86_64.rpm
alien -k TIVsm-BA.x86_64.rpm
dpkg -i *.deb
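
As a quick sanity check that all four packages registered (just how I would verify it, not part of the official procedure):

dpkg -l | grep -i -e tivsm -e gsk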

Link the libraries to the right places so that the TSM client can find them (an ldconfig-based alternative is sketched below):

ln -s /opt/tivoli/tsm/client/api/bin64/libgpfs.so /lib/
ln -s /opt/tivoli/tsm/client/api/bin64/libdmapi.so /lib/
ln -s /usr/local/ibm/gsk8_64/lib64/libgsk8cms_64.so /lib/
ln -s /usr/local/ibm/gsk8_64/lib64/libgsk8ssl_64.so /lib/
ln -s /usr/local/ibm/gsk8_64/lib64/libgsk8sys_64.so /lib/
ln -s /usr/local/ibm/gsk8_64/lib64/libgsk8iccs_64.so /lib/
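
Alternatively, instead of symlinking every library into /lib, you could register the two library directories with the dynamic linker; a minimal sketch, assuming the default install paths from the packages above:

echo /usr/local/ibm/gsk8_64/lib64 > /etc/ld.so.conf.d/tsm.conf
echo /opt/tivoli/tsm/client/api/bin64 >> /etc/ld.so.conf.d/tsm.conf
ldconfig

Either way, you can check that the client resolves all its libraries with "ldd /opt/tivoli/tsm/client/ba/bin/dsmc | grep 'not found'" – no output means everything resolved.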

ln -s /opt/tivoli/tsm/client/lang/EN_US /opt/tivoli/tsm/client/ba/bin/

Configure your dsm.opt and dsm.sys plus your scheduler and so forth (a minimal example follows below), and you are ready to go…
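
For reference, a minimal dsm.sys/dsm.opt pair could look like the following (server name, address and node name are placeholders for your environment):

/opt/tivoli/tsm/client/ba/bin/dsm.sys:

SERVERNAME        TSMSRV1
  COMMMETHOD      TCPIP
  TCPSERVERADDRESS tsmserver.example.com
  TCPPORT         1500
  NODENAME        myubuntubox
  PASSWORDACCESS  GENERATE

/opt/tivoli/tsm/client/ba/bin/dsm.opt:

SERVERNAME        TSMSRV1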

Hope this helps somebody 😉

Attaching HP EVA LUNs to VIOS

I am currently doing a PS700 deployment. The VIOS is installed on the two internal disks of the blade, but the LPARs will sit on SAN storage that is mapped through the VIO server.

Now the storage administrator mapped half of the LUNs from a NetApp storage subsystem and the other half from an HP EVA. While the NetApp LUNs are natively supported by the VIOS (2.1.3.10-FP23 / AIX 6100-05), the EVA LUNs only showed up as “Other FC SCSI Disk Drive”.

It took me quite a while (mostly because I couldn’t access the Internet while connected to the VPN ;)) to figure out what to do here, but I eventually found the following procedure to enable the native AIX MPIO support for the EVA LUNs:


With the HP (for dev/test/train) and NetApp disks attached to all VIOS servers:

1. Delete all defined disks before installing the HP ODM entry:

lsdev -Cc disk | awk '{print $1}' | grep -w -v hdisk0 | grep -w -v hdisk1 | while read line
do
rmdev -dl $line
done

2. Untar HP_HSV_AIX_MPIO_1041U.tar.
3. Run "inutoc ." in the directory where you untarred the file.
4. Run smitty install_all and use the menu to install all filesets.
5. Run cfgmgr to discover the EVA disks (a quick check follows below).
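
Once cfgmgr completes, the EVA LUNs should no longer show up as “Other FC SCSI Disk Drive”. A quick way to verify (device names will differ in your environment):

lsdev -Cc disk
lspath

The first command should now list the EVA LUNs with an HP HSV description; the second should show multiple enabled MPIO paths per hdisk.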

Hope this helps someone out there 😉

IBM Storage Optimisation Breakfast – Auckland

Last week we went to the “IBM Storage Optimisation Breakfast” here in Auckland. As you can guess, that breakfast started quite early 😉
Anyway, through the rain we made it there in time.

8:00am Registration / Breakfast
8:25am Welcome & Overview of Current Storage Technologies, IBM ANZ Executive
9:00am IBM Customer Case Study
9:40am Storage Future Trends Presentation by Tony Pearson, Master Inventor
10:20am Questions & Closing

The breakfast itself was typical NZ/Continental (I thought I’d mention that here ;-)).

The two most interesting agenda items were the IBM Customer Case Study and the Storage Future Trends presentation by Tony Pearson.

The case study was presented by the customer’s storage architect (the customer being a big telecommunications company here in NZ) and was very interesting.
He spoke about the challenges they faced (vendor lock-in, migration problems, …) and how they eventually decided to work around them. They ended up buying IBM’s SVC and a DS8700 as the tier-one storage backend and are so far impressed by the capabilities and the migration speed. He mentioned that before SVC it took them about 4 years to decommission an old storage array after the new one had been procured, whereas SVC simplifies storage migration with its online migration features.

The next presentation was done by Tony Pearson.
Summary from his Blog:
Tony Pearson is a Master Inventor and Senior IT Storage Consultant for the IBM System Storage product line at the IBM Executive Briefing Center in Tucson Arizona, and featured contributor to IBM's Virtual Briefing Center. Tony has been working in IBM storage for over 20 years, and is author of the book Inside System Storage: Volume I. This blog is for the open exchange of ideas relating to storage and storage networking hardware, software and services.

Tony spoke about three new trends in the storage & IT world that he thinks will eventually become standard.

SSD + SATA = New storage architecture

Tony said that combining these two device types in one storage subsystem will soon become the industry standard, for the following reasons:

SSD performance per watt is far higher than that of a 15k RPM hard disk (2,970 IOPS/watt compared to 17.5 IOPS/watt for a 15k disk – roughly a 170x difference)
SATA disks offer significant savings in hardware pricing & energy consumption compared to SAS/FC disks
SSDs have the disadvantage of being much more expensive and smaller than standard hard disks

So combining SSDs with SATA drives will let customers benefit from the low price and energy consumption of commodity SATA hard drives and the high performance of an SSD.

We can already see this kind of design getting traction by looking at XIV, which consists of standard off-the-shelf hardware accelerated by 15x8 GB of RAM. You could say that the memory of each node acts as an SSD here.

FC HBA + NIC = CNA (Converged Network Adapters)

The other very interesting point he presented was a big possible change in networking. Instead of having to patch 2 cables for FC disk, 2 cables for FC tape and 2 or more cables for Ethernet (management, gigabit, 10G Ethernet), all of this can be replaced by a CNA (Converged Network Adapter). The CNA appears to the OS as both a NIC and an FC HBA and routes both kinds of traffic to a top-of-rack switch (8 Gb FC & 10 Gb Ethernet capable) that then demultiplexes the packets and de-stages them to the different networks (FC & Ethernet). This technology is already available for some servers, and it can be developed further by eventually unifying the two networks.

Cloud computing

Cloud computing was the last point on Tony’s agenda for that day, but not many new things were presented. He basically said that he thinks sooner or later all customers will consume IT from the cloud, just as they use an outside company for accounting or cleaning today. IT will become a service, delivered by highly specialized ISPs who will drive efficiency and cost effectiveness.

TSMManager

Hello there,

Everybody who has worked with TSM before knows that it can be quite complicated since IBM discontinued the TSM server web console with TSM 5.3 (which was unfortunate for users and IBM). The web console was replaced by the ISC (Integrated Solutions Console), which in its first release was just a joke…

Finally, after a long while of development, IBM introduced a new administrative console with TSM 6.1, which is still not as good as the nice old web console from TSM 5.3, but much better than the earlier ISC versions.

But now to the topic…

TSMManager is developed by an ex-IBM employee who thought it might be nice to have a PROPER UI for the TSM server. I personally recommend the tool to everybody who uses TSM or wants to learn it.
I used the tool to learn how TSM works, and thanks to its ease of use and the online help for most functions, this is much easier than using just the CLI.

If you want to know more about it or want to try it out, go to the webpage.

Certus Solutions is an authorized reseller here in NZ & Australia, and if you want to buy this tool, just contact us.

IBM TSM 6.2 implementation on AIX 6.1 with IBM Softek replication

Hello out there,

This first entry in our techie blog covers a sample implementation of TSM 6.2 with the DB2 backend (introduced with TSM 6.1) on AIX 6.1, asynchronously replicated to a second AIX 6.1 system via IBM’s Softek replication software.

In our small computer room here in Auckland (New Zealand) I used the two IBM p630s running AIX 6.1 TL05 as the OS and installed TSM 6.2 on both systems in an active/standby configuration.
The specs of the systems are as follows:

                                   System 1 (replication source)   System 2 (replication target)
hostname                           akltsm01                        akltsm02
OS level                           6100-05                         6100-05
CPU cores / speed                  2 / 1453 MHz                    1 / 1200 MHz
memory                             8192 MB                         2048 MB
volume groups                      rootvg, tsmvg                   rootvg, tsmvg
Softek version                     2.6.2                           2.6.2
IP address, TSM interface          192.168.30.191                  192.168.30.192
IP address, replication interface  172.16.55.128                   172.16.55.129

The following filesystem structure has been set up on both systems to support the TSM application:

Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 786432 301552 62% 13614 26% /
/dev/hd2 4718592 419256 92% 43559 41% /usr
/dev/hd9var 1048576 526152 50% 4782 8% /var
/dev/hd3 1572864 1556832 2% 119 1% /tmp
/dev/hd1 4456448 4403144 2% 8 1% /home
/dev/hd11admin 262144 261384 1% 5 1% /admin
/proc - - - - - /proc
/dev/hd10opt 262144 58320 78% 2281 24% /opt
/dev/livedump 524288 523552 1% 4 1% /var/adm/ras/livedump
/dev/lg0dtc0 10485760 9393464 11% 43 1% /tsm/akl01_db
/dev/lg0dtc1 40370176 6650584 84% 37 1% /tsm/akl01_actilog
/dev/lg0dtc2 26214400 24724768 6% 34 1% /tsm/akl01_archlog
/dev/lg0dtc4 5242880 1554776 71% 11512 7% /opt/tivoli/tsm
/dev/lg0dtc3 1310720 461288 65% 294 1% /home/tsminst1
/dev/lg0dtc5 13107200 13046688 1% 5 1% /tsm/stg_sequ01
/dev/tsmseq02_lv 13107200 9238160 30% 6 1% /tsm/stg_sequ02

As you can see on the left-hand side, all replicated filesystems are located on dtc devices /dev/lgXdtcY, where lgX stands for the Softek replication group and dtcY for the actual replication device.
The next code sample shows how a dtc device is configured:

#
PROFILE: 1
REMARK: TSM db
PRIMARY: SYSTEM-A
DTC-DEVICE: /dev/dtc/lg0/rdsk/dtc0
DATA-DISK: /dev/rdb01_lv
SECONDARY: SYSTEM-B
MIRROR-DISK: /dev/dtc/lg501/rdsk/dtc0
MIRROR-DEVNO: 1 46
#

The replication is set up in a symmetric configuration, which means that the MIRROR-DISK is also a dtc device on the secondary system. In a fallback scenario this lets us turn the replication around and replicate back to the original devices on the primary system, which reduces downtime: the application on the secondary system only has to be stopped once the replication is back in sync. In a non-symmetric configuration the application cannot run until all changes have been replicated back from the secondary system to the primary system. A sketch of the reversed profile follows below.
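For illustration, the corresponding profile on the secondary system would mirror the one above with the roles reversed. This is a sketch inferred from the sample profile, not taken from the actual configuration (the DATA-DISK and MIRROR-DEVNO values in particular are placeholders):

#
PROFILE: 1
REMARK: TSM db
PRIMARY: SYSTEM-B
DTC-DEVICE: /dev/dtc/lg501/rdsk/dtc0
DATA-DISK: /dev/rdb01_lv
SECONDARY: SYSTEM-A
MIRROR-DISK: /dev/dtc/lg0/rdsk/dtc0
MIRROR-DEVNO: 1 46
#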

During our presentation event (Certus Enterprise Infrastructure Usergroup in Auckland in August 2010) we were able to successfully demonstrate a failover of the TSM system and start the fallback synchronization.

This setup can be applied to any database application running on AIX, Solaris, HP-UX, Linux, z/OS or Windows.

If I have the time I will also create a screencast of a failover and publish it here.

If anyone has questions about this setup, feel free to contact us anytime.