Task: remove the user1, user2 and user3 ssh keys from multiple machines, for both home users and the root account,
covering ssh protocol 1 as well as protocol 2 files.

To accomplish this task I've used sed and DSH (Dancer's Shell / Distributed Shell) on Debian GNU/Linux.

I've defined the redpool host group:

# cat /etc/dsh/group/redpool
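A dsh group file is simply a list of hostnames, one per line. A hypothetical redpool definition, written to a temporary path here so the example stands alone (the real file lives at /etc/dsh/group/redpool):

```shell
# Hypothetical host list -- substitute your real machine names
cat > /tmp/redpool <<'EOF'
red1
red2
red3
EOF
cat /tmp/redpool
```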

and I've done:

dsh -g redpool "sed -i '/user1\|user2\|user3/d' /{root,home/*}/.ssh/authorized_keys*"

(Note: in GNU sed's basic regular expressions, \| alternation binds loosely, so in /^.*user1\|user2\|user3.*$/ the anchors apply only to the outer alternatives; the unanchored form above matches the same lines and is clearer.)

But... you could sue me: "You're guilty of using multiple commands to perform your job."
So, let's ride:

for i in {1..100}; do ssh root@red$i "sed -i '/user1\|user2\|user3/d' /{root,home/*}/.ssh/authorized_keys*"; done

The loop above is only usable when hostnames / addresses are numbered sequentially.
With varied naming schemes it's easier to use the dsh command.
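Before deleting anything in place, it's worth previewing the match on a throwaway file. A minimal sketch with hypothetical key entries, using the simplified form of the expression sed '/user1\|user2\|user3/d':

```shell
# Two hypothetical authorized_keys entries
printf 'ssh-rsa AAAA user1@laptop\nssh-rsa BBBB alice@laptop\n' > /tmp/ak_test
# Same deletion without -i: prints only the lines that would survive
sed '/user1\|user2\|user3/d' /tmp/ak_test
# ssh-rsa BBBB alice@laptop
```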


Solaris hostname

Setting the hostname is one of the most basic installation procedures. Unlike other operating systems, Solaris has a few places to visit - and a few places where it's easy to forget at least one of them.

Ex. setting the sunshine hostname on a machine with a bge0 interface and an IP address (192.0.2.10 stands in for the real address below):

# echo -e "192.0.2.10\tsunshine" >> /etc/inet/hosts
# echo "sunshine" > /etc/hostname.bge0
# echo "sunshine" > /etc/nodename

There's one more file with hostname and IP address information - /etc/inet/ipnodes. In Solaris 11 it's a symlink to the /etc/inet/hosts file, so you can leave it alone.

Changing the running system's name is as simple as:

# hostname sunshine



Remove asterisks only from the lines beginning with a specified TAG:

sed '/^TAG/ s/\*//g'
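A quick check on hypothetical input, using the straightforward form sed '/^TAG/ s/\*//g' (substitute only on lines matching the address):

```shell
# Asterisks vanish only on the line starting with TAG
printf 'TAG *one*\nplain *two*\n' | sed '/^TAG/ s/\*//g'
# TAG one
# plain *two*
```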

turn into mirror

In the example below I've used a single-volume zfs pool and converted it into a mirrored zpool.
To show the possibilities of zfs more extensively, it's backed by file volumes instead of physical drives.

# mkfile -nv 1g mud
mud 1073741824 bytes
# zpool create pond /export/mud

I've created the pond pool backed by the mud file (the files live in /export, hence the /export/mud path).
Now, using the attach subcommand of zpool, I'll turn it into a mirror with water (a second file volume).

# mkfile -nv 1g water
water 1073741824 bytes
# zpool attach pond /export/mud /export/water
# zpool status pond
  pool: pond
 state: ONLINE
 scrub: resilver completed with 0 errors on Wed Apr 11 09:43:22 2007
config:

        NAME               STATE     READ WRITE CKSUM
        pond               ONLINE       0     0     0
          mirror           ONLINE       0     0     0
            /export/mud    ONLINE       0     0     0
            /export/water  ONLINE       0     0     0

errors: No known data errors


quick snapshot tour

To take a snapshot of the zfs filesystem pool/aqua, run, for example:
# zfs snapshot pool/aqua@cold
where cold is the name of the newly created snapshot.

You'll find it in the .zfs/snapshot directory under the filesystem's mountpoint (e.g. /pool/aqua/.zfs/snapshot). By default that directory is hidden.
You can change this behaviour by setting the snapdir property on pool/aqua to visible:
# zfs set snapdir=visible pool/aqua

To take advantage of snapshots in another way, use them to quickly reproduce filesystems with the clone subcommand.
A clone is a fully functional, writable filesystem, unlike a snapshot, which is read-only.
# zfs clone pool/aqua@cold pool/ice

It's a good habit to name a snapshot in a way that lets you remember its creation time.
In case your memory leaks, do the following:
# zfs get creation pool/aqua@cold

To destroy a snapshot, simply run:
# zfs destroy pool/aqua@cold

VMware command line handling

No more 'VMware Server Console' just to power on a virtual system. Sometimes I only need to turn it on from the command line over an ssh session. Now I know how.

Ex. starting Solaris Express 'Nevada' system.

$ vmware-cmd /var/lib/vmware-server/Virtual\ Machines/sol-nv-b69/sol-nv-b69.vmx start

To get its state, simply run:

$ vmware-cmd /var/lib/vmware-server/Virtual\ Machines/sol-nv-b69/sol-nv-b69.vmx getstate

When I installed my first Solaris system under VMware, I chose the wrong architecture for the guest OS. Correcting the entry is as simple as:

$ cd /var/lib/vmware-server/Virtual\ Machines/sol-nv-b69/
$ sed -i'' -e 's/solaris10/solaris10-64/' sol-nv-b69.vmx

Remember to change this value while the virtual machine is down. After it's powered back on, the 32-bit architecture will be replaced by 64-bit.
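As a sanity check, the substitution can be tried on a hypothetical minimal .vmx fragment first:

```shell
# Hypothetical guestOS line, as found in a .vmx file
printf 'guestOS = "solaris10"\n' > /tmp/guest.vmx
# In-place edit, with the closing delimiter of the s command in place
sed -i -e 's/solaris10/solaris10-64/' /tmp/guest.vmx
cat /tmp/guest.vmx
# guestOS = "solaris10-64"
```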

Sadly, it generates an error message (VMware Server 1.0.3 build-44356) and user action is required. You can still bypass the graphical frontend using the answer subcommand.

$ vmware-cmd /var/lib/vmware-server/Virtual\ Machines/sol-nv-b69/sol-nv-b69.vmx start
VMControl error -16: Virtual machine requires user input to continue
$ vmware-cmd /var/lib/vmware-server/Virtual\ Machines/sol-nv-b69/sol-nv-b69.vmx answer

Question (id = 49294347) :The vlance NIC is not supported for 64-bit guests in this release.
Please consult the documentation for the appropriate type of NIC to use with 64-bit guests.
Failed to configure ethernet0.

0) OK
Select choice. Press enter for default <0> :
$ vmware-cmd /var/lib/vmware-server/Virtual\ Machines/sol-nv-b69/sol-nv-b69.vmx getstate
getstate() = off
$ sed -i'' -e 's/solaris10-64/solaris10/' /var/lib/vmware-server/Virtual\ Machines/sol-nv-b69/sol-nv-b69.vmx
$ vmware-cmd /var/lib/vmware-server/Virtual\ Machines/sol-nv-b69/sol-nv-b69.vmx start
start() = 1
$ vmware-cmd /var/lib/vmware-server/Virtual\ Machines/sol-nv-b69/sol-nv-b69.vmx getstate
getstate() = on

I couldn't leave it unresolved like that. A few minutes later the solution was found - the Source.

You're at a crossroads.
You can force the use of the older adapters by setting these .vmx configuration options:

ethernet0.allow64bitVlance = "TRUE"
ethernet0.allow64bitVmxnet = "TRUE"

...or take a step into the future. I chose the latter. To take advantage of a 64-bit guest OS you have to change the network adapter from vlance to e1000, which is achieved by enforcing the correct driver in the configuration.

Ex. from my sol-nv-b69.vmx file:
ethernet0.virtualDev = "e1000"

Remember to update your Solaris network configuration:

* plumb a new interface
$ ifconfig e1000g0 plumb

* add /etc/hostname.e1000g0 file

* replace the file /etc/dhcp.pcn0 with a new one - /etc/dhcp.e1000g0
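Collected together, the three steps above might look like this (Solaris-specific commands, sketched under the assumption that the hostname is sunshine and the old interface was pcn0):

```
# bring up the new e1000g0 interface
ifconfig e1000g0 plumb
# persist the interface configuration across reboots
echo "sunshine" > /etc/hostname.e1000g0
# carry the DHCP marker over from the old pcn0 interface
mv /etc/dhcp.pcn0 /etc/dhcp.e1000g0
```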