Archive | February, 2011

Creating an offline replica of our Windows Environment-Part 2

In part one I set up an offline DC with DHCP and DNS, and a Windows 7 VM. Next I wanted to create a copy of our Exchange server offline. Easy, right? Just install Exchange 2003 in DisasterRecovery mode (setup.exe /DisasterRecovery). Easy.

That is when the fun started. I brought up one VM with Server 2003 R2 Enterprise, set the server name to match the production server, EXCHANGESERVER01 (this is all offline, so we are okay), and tried installing with the DisasterRecovery switch. Error:

The component “Microsoft Exchange Messaging and Collaboration Services” cannot be assigned the action “Disaster Recovery” because: – The server object for this server (“EXCHANGESERVER01”) is a clustered Exchange Virtual Server. You may not perform maintenance on this object from a standalone server. Cluster Admin should be used to perform maintenance on this Exchange Virtual Server.

Crap. Our production Exchange 2003 server is a cluster, and the installer found the AD record (offline) for the mail server. At least we know the correct info is in AD! (Note to self: look at EXCHDUMP to find the info that AD has.) So I decided to try to trick Exchange 2003 into thinking it is being installed on a cluster, just one with only a single node.

So I added a couple of drives to the 2003 server (and renamed it) and changed the SCSI controller’s bus sharing in ESX to “Virtual”. Tried to boot the VM and . . . Error. Crap.

Power On virtual machine: VMware ESX Server cannot open the virtual disk, “/vmfs/volumes/GUID/OEXCHANGESERVER01/EXCHANGESERVER01-02.vmdk” for clustering. Please verify that the virtual disk was created using the ‘thick’ option.

Turns out you have to create the VMDK disk images from the command line if you want them to act like a cluster and be shared!

vmkfstools -d eagerzeroedthick -c 1G -a lsilogic EXCHANGESERVER01-02.vmdk

I recreated all my drives and got the VM to boot. Next, I went into Cluster Administrator and set up a simple one-node cluster. Now it was time to try to install Exchange 2003 AGAIN. I fired up Exchange’s setup.exe /DisasterRecovery, AGAIN. Error. Crap.

The component “Microsoft Exchange” cannot be assigned the action “Disaster Recovery” because: – Microsoft Exchange setup does not support the use of the DisasterRecovery action when running on cluster nodes.

Seems like I am in a loop. I can’t install Exchange via DisasterRecovery on a cluster, and I can’t install Exchange on a standalone machine whose AD record says it is a cluster. Crap.

I decided to just go ahead and install Exchange normally, with no DisasterRecovery switch, on the one-node cluster. While I went through the setup, I did some surfing. Turns out what I was trying to do is:

How to Move All Exchange Virtual Servers from a Production Exchange 2003 Cluster to a Standby Exchange 2003 Cluster.

Maybe it is just me, but that title does not sound like what I am trying to do – recover a cluster on a single node. Anyway, I continued installing Exchange 2003 normally on my offline single-node cluster, applied Service Pack 2, and then added a couple of resources to the cluster according to the article:

  1. Created an IP Address and a Network Name resource on the cluster that matched the production server, and brought them online
  2. Created an Exchange System Attendant resource.

Once I created the Exchange System Attendant resource, the Cluster Administrator tool found the AD records and everything was matched up. All the Exchange system configuration was there. ESM looked like it did on our production servers. Perfect. I brought one of our Information Stores online (with no data, it just creates a new one).

Now, in theory, I can launch Outlook on my Windows 7 VM and it should create a new mailbox in the right store. (For this test, I did not try to restore our EDBs – that is my next post.)

Worked. Very nice.

Creating an offline replica of our Windows Environment-Part 1

I am working on my new project: upgrading our Exchange 2003 environment to Exchange 2010. I wanted to create an offline replica of our current environment. This is how I set about doing that:

  • Before I started, I created a new workstation – a Windows 7 VM on our ESX server. This machine is joined to the domain, but no one has ever logged on (it was deployed via SCCM OSD). I left it overnight to make sure it was in AD and AD had replicated.
  • Next I created a network in ESX that is not attached to any physical adapters (and that no other machines are connected to), called “offline network”.
  • I then used the built-in ESX function of cloning a VM and created a clone of one of our Domain Controllers (DCs).
  • This clone was assigned to the offline network, and the Windows 7 VM was moved to the offline network as well.
  • Since the DC has AD, DNS, and DHCP all running on it, I should be able to reboot the Windows 7 machine and log in with my account.
  • Since I have never logged on to this Windows 7 VM before, I know it is not using a cached copy of my account, so the VM has to be communicating with the DC.
  • Finally, I seized the FSMO roles from the other DC, which does not exist on the offline network (see the ntdsutil sketch below).

That gets me a functioning DC and a functioning DNS with a test Windows 7 machine and Office 2010.
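Seizing the FSMO roles (the last bullet above) is done with ntdsutil on the cloned DC. I am sketching this from memory, so treat it as an outline rather than an exact transcript; the sub-command wording differs slightly between Windows versions, and OFFLINEDC below is just a stand-in for whatever the cloned DC is named:

ntdsutil
roles
connections
connect to server OFFLINEDC
quit
seize schema master
seize domain naming master
seize RID master
seize PDC
seize infrastructure master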

Next up: Recovering an Exchange 2003 cluster to a single machine in an offline network.

Xnest, XDMCP and X11 on CentOS

It has been a while since I have used Xnest. It works. Slow, but it works. I can ssh into a box and bring a full X session back to my Mac.

In CentOS 5.5 I had to edit /etc/gdm/custom.conf and add Enable=true under the [xdmcp] section:

[xdmcp]
Enable=true

Restart X, and now I can run:

Xnest :1 -geometry 1024x768 -kb -query localhost

which will bring a GNOME session back to my X11 server.

Very easy, without having to open any ports!
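Put together, the whole thing can be run from the Mac in a single line. This is a minimal sketch, assuming X11 forwarding is allowed by sshd on the CentOS box, with user@centosbox standing in for the real account and host:

# placeholder host; needs X11Forwarding enabled in sshd_config on the remote side
ssh -Y user@centosbox "Xnest :1 -geometry 1024x768 -kb -query localhost"

The Xnest window (with the gdm login inside it) comes back over the forwarded X11 connection, so nothing beyond ssh has to be reachable.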

GNU date vs BSD date

I usually develop and test my Bash scripts on my Mac, mostly for use on RedHat systems. Occasionally I run into problems with this workflow. Recently I realized there was a difference between the date command on RedHat and the date command in OS X. Turns out BSD date != GNU date. The workaround: install coreutils from MacPorts, and add this alias to my .bashrc:

alias date="/opt/local/bin/gdate"

Update: gdate is part of the GNU coreutils, and the MacPorts install command for gdate is: sudo port install coreutils
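Not from the original post, but as a quick illustration of the kind of difference that bites: relative dates use -d with GNU date and -v with BSD date.

# GNU date (RedHat, or gdate from coreutils): yesterday's date
date -d '1 day ago' +%Y%m%d
# BSD date (stock OS X): the same thing
date -v-1d +%Y%m%d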

My MacPorts cheat sheet

These 3 commands will check for new ports, upgrade outdated ports, and remove older versions.
  • sudo port selfupdate
  • sudo port upgrade outdated
  • sudo port uninstall inactive

To make sure you have a slimmed down install, use port_cutleaves to remove unnecessary ports. There are often “Build Dependencies” (like autoconf, automake, libtool, m4, help2man, p5-locale-gettext) that are no longer needed after a package is installed.

  • sudo port install port_cutleaves
  • sudo port_cutleaves (I run this a couple of times)

Check if a file exists on a remote server

I was working on a script that copies a WordPress site to a local machine, and I wanted to check whether the remote path was actually a WordPress site.
Here is the code I used.

# build a remote command that prints 0 when wp-config.php exists at the expected path
CHECKCMD="ls /path/to/wp-config.php | grep wp-config.php > /dev/null; echo \$?"
CHECKFILE=$(ssh "$SRCSERVER" "$CHECKCMD")
if [ "$CHECKFILE" -eq 0 ]; then
	echo "file exists"
fi
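A slightly simpler variant (my suggestion, not what the original script used) is to let ssh hand back the exit status of test directly:

# test -f exits 0 when the file exists, and ssh returns that exit status
if ssh "$SRCSERVER" test -f /path/to/wp-config.php; then
	echo "file exists"
fi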

Run a task sequence after completed OSD

We wanted to run a Task Sequence after an Operating System Deployment. The OSD image would have only Windows, Office, and antivirus. The task sequence would have all the other common packages we roll out (Adobe Reader and Flash, Firefox, QuickTime, Java . . .). The problem was that I wanted an easy way to deploy all the packages at once, and an easy way to keep them up to date.

First we created a Collection (that updates every 5 minutes) based on the following query:

SELECT SMS_R_System.Name, SMS_G_System_OPERATING_SYSTEM.InstallDate 
FROM SMS_R_System inner join SMS_G_System_OPERATING_SYSTEM on SMS_G_System_OPERATING_SYSTEM.ResourceId = SMS_R_System.ResourceId 
WHERE DATEDIFF(dd,SMS_G_System_OPERATING_SYSTEM.InstallDate,GetDate()) < 2
ORDER BY SMS_G_System_OPERATING_SYSTEM.InstallDate DESC

This query returns all the machines that have had their operating system installed in the last 2 days.

Next we created a Task Sequence that installed all the packages, and advertised it to the new collection.

Now, within a few minutes of a machine adding itself to SCCM, it shows up in the collection and the Task Sequence can be run.

The key was the query to find the machines that have been installed recently. Thanks xrobx99 for your help with this!

Quick check if a MySQL database exists

Here is my bash code that checks if a db exists before I try to create one in a script:

# the database name we are looking for
DBNAME="dblookingfor"
# SHOW DATABASES LIKE prints the name if it exists; grep's exit status is what we capture
DBEXISTS=$(mysql --batch --skip-column-names -e "SHOW DATABASES LIKE '"$DBNAME"';" | grep "$DBNAME" > /dev/null; echo "$?")
if [ "$DBEXISTS" -eq 0 ]; then
	echo "A database with the name $DBNAME already exists. exiting"
	exit;
fi

This will exit if there is already a database with the name you are searching for. The tricky part for me (as it always is) was the quoting in the LIKE statement: the shell variable has to end up inside the SQL single quotes while the whole -e argument sits in double quotes.
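For completeness, the create step the script goes on to do would look something like the line below. This is a sketch; credentials and character set are whatever your setup already uses:

# safe to create the database now that we know the name is free
mysql -e "CREATE DATABASE $DBNAME;"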

Rackspace Cloud Files download script

The new(er) additions to the services I use/recommend are Rackspace Cloud Servers and Rackspace Cloud Files.

We were evaluating cloud services to host client websites, and I ended up choosing Rackspace’s cloud offerings. I really like the services they provide.

With Cloud Files, I can upload files that can be accessed from anywhere. I decided to put our common scripts there; that way, when we provision a new server, behind a firewall or in the cloud, we can pull from the same place. All I have to do is keep the scripts up to date in one place.

Before I knew about Chef (a future project I can’t wait to have time for), I created simple scripts to install a common set of packages on every server – our SOE (Standard Operating Environment). Once a server is provisioned, we can update it from any other server to have the same core set of packages and configurations. The most important part of this is that we install Git and pull down python-cloudfiles:

yum install git -y
git clone git://github.com/rackspace/python-cloudfiles.git
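Cloning only fetches the source; the library still has to be installed before Python can import it. Something like the usual setuptools step of that era (not quoted from the original post):

cd python-cloudfiles
# installs the cloudfiles module so the download script below can import it
python setup.py install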

Once python-cloudfiles is installed, we use the following script to pull down the common set of scripts:

import os
import cloudfiles

# container, sourcepath, and destpath are set earlier in the script
conn = cloudfiles.get_connection('username', 'keynumberthatisreallylong')
cont = conn.get_container(container)
objects = cont.get_objects(path=sourcepath)
for filename in objects:
	print "Downloading " + (os.path.join("/", container, sourcepath, os.path.basename(filename.name))) + " to " + destpath
	destfile = os.path.join(destpath, os.path.basename(filename.name))
	filename.save_to_filename(destfile)
	# set the local mtime to the Cloud Files last_modified timestamp (minute precision)
	timestamp = filename.last_modified[:filename.last_modified.find(".")-3].replace('-', '').replace(':', '').replace('T', '')
	cmd = "touch -m -t " + timestamp + " " + destfile
	os.system(cmd)

This pulls down each file in a Cloud Files directory and saves it locally. I added the extra step of setting the local modified date to the Cloud Files last_modified date, so we can tell which downloaded files have changed recently (that is, were recently uploaded to Rackspace Cloud Files).

I plan to replace this with Chef one day, but right now it works really well for us.

Waking a sleeping Mac Pro upon opening a folder

Scenario

I have two Macs at home, a Mac Pro and a Mac Mini. The Mac Mini is attached to our TV. I put my Mac Pro to sleep when I leave for work in the morning. My wife comes home and tries to play videos for my son on the Mac Mini. The videos are actually on the Mac Pro, but as long as that machine is on, this is transparent to her. And that is the problem. When she clicks on the symbolic link while the Mac Pro is sleeping, she can’t find the videos she is looking for.

I needed a way to wake the Mac Pro when she goes looking for the videos.

This is a longer post describing my whole wake/sleep setup. The requirements are MacPorts and a Wake-on-LAN (WOL) utility. I use DD-WRT, so there is one on my home router.

I am a big fan of MacPorts. I used to use Fink, but I switched, and I don’t remember why. There are two utilities in MacPorts that are useful for sleeping Macs: sleepwatcher and wakeonlan. You could install sleepwatcher from source, but I prefer a package management system.

Sleep

Sleepwatcher is the most important part of this system. I used to put my Mac to sleep every night at 11 pm, but if I enabled “Wake for network access” in the Energy Saver preferences, the machine would wake up every two hours. This article describes the problem and a solution: sleepwatcher.

So I installed sleepwatcher via MacPorts. Then I added the following two lines to /opt/local/etc/rc.sleep (I could not get it working in my “$HOME/.sleep” file):

/bin/sleep 1
/usr/sbin/systemsetup -setwakeonnetworkaccess on >/dev/null

Then I added the following to /opt/local/etc/rc.wakeup (again, I could not get my “$HOME/.wakeup” file to work):

/usr/sbin/systemsetup -setwakeonnetworkaccess off >/dev/null

This allows the machine to go to sleep and not wake until it receives a WOL packet.
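One note from memory rather than from the original post: the rc.sleep/rc.wakeup scripts only run if the sleepwatcher daemon itself is running. With the MacPorts install, loading the bundled launchd item should be roughly:

# loads the launchd startup item installed by the MacPorts sleepwatcher port
sudo port load sleepwatcher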

That takes care of the sleep part.

Wake

Now my machines are sleeping (properly), and they can be woken by a WOL packet. Since I use DD-WRT, I can go to the web interface and wake a machine (I have OpenVPN tunnels going all over the place, so I can access the web interface internally). It occurred to me that if there is a web interface, there has to be a WOL executable on the router. With public key authentication, I can connect to my DD-WRT router with the following command and wake a machine:

ssh homerouter "/usr/sbin/wol -i 192.168.X.255 xx:xx:xx:xx:xx:xx"

That takes care of the wake part.

Folder Actions

To have a machine wake when I access a folder, I attach the following AppleScript as a Folder Action:

on opening folder this_folder
	try
		tell application "Finder"
			activate
			try
				set ping_result to (do shell script "ping -c 1 machine.trying.towake;echo -n")
				if "100.0% packet loss" is in ping_result then
					do shell script "ssh homerouter '/usr/sbin/wol -i 192.168.X.255 xx:xx:xx:xx:xx:xx'"
				end if
			end try
		end tell
	on error errmsg
	end try
end opening folder

If the machine does not answer a ping, the script will ssh to the DD-WRT router and launch the wol executable to wake the sleeping machine.

A complex system, but it works.
