Finding Duplicates using SQL

While migrating old data to a new database schema, I was unable to add a PRIMARY KEY on the legacy_customer_id field because there were duplicates. It turned out that the old application did not validate the data very well: it allowed duplicates to be created where one of the customer_ids contained a space character, which made the value technically unique.

I used the following query to find any others:

SELECT
    customer_id,
    COUNT(customer_id)
FROM
    customers
GROUP BY
    customer_id
HAVING
    (COUNT(customer_id) > 1);

This allowed me to find all customer_ids that had duplicates and clean things up.
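
In my case the culprits were stray spaces, so once the offenders were found the values could be normalized. Here is a sketch of the kind of cleanup I mean, assuming MySQL and the same customers table as above (back up the table before running the UPDATE):

-- Show rows whose customer_id carries leading or trailing whitespace
SELECT customer_id FROM customers WHERE customer_id <> TRIM(customer_id);

-- Strip the whitespace so the real duplicates become identical
UPDATE customers
SET customer_id = TRIM(customer_id)
WHERE customer_id <> TRIM(customer_id);

After the UPDATE, the GROUP BY ... HAVING query above will report the now-identical rows, which still need to be merged or deleted by hand before the PRIMARY KEY can be added.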

Linux backup using CRON to local directory

As many have pointed out, I am on a backup and disaster recovery kick lately. Some would say that it is about time; others are simply glad to see that data is now being backed up. I have found that it is easiest to zip up files on the local machine prior to moving them to their final destination. So let's get started:

I have multiple Linux servers with many websites on each, as well as databases. So I created a script that simply tars the files, then gzips them with the date in the filename for archiving.

Here is the file, named ‘backupall.sh’, that I save in a place reachable by the user that will run the cron job:

#!/bin/sh
date
echo "############### Backing up files on the system... ###############"
 
backupfilename=server_file_backup_`date '+%Y-%m-%d'`
 
echo "----- First do the sql by deleting the old file and dumping the current data -----"
rm -f /tmp/backup.sql
mysqldump --user=mysqluser --password=password --all-databases --add-drop-table > /tmp/backup.sql
 
echo "----- Now tar, then zip up all files to be saved -----"
tar cvf /directory/to/store/file/${backupfilename}.tar /home/* /var/www/html/* /usr/local/svn/* /etc/php.ini /etc/httpd/conf/httpd.conf /tmp/backup.sql /var/trac/*
gzip /directory/to/store/file/${backupfilename}.tar
# gzip replaces the .tar with a .tar.gz in place, so there is no leftover tar to remove
chmod 666 /directory/to/store/file/${backupfilename}.tar.gz
 
echo "############### Completed backing up system... ###############"
date
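
I schedule the script from the crontab of that same user (crontab -e). The schedule and paths below are only examples; point them at wherever you saved backupall.sh and wherever you want the log to go:

# Run the backup every night at 2:30 AM and append all output to a log
30 2 * * * /home/backupuser/backupall.sh >> /home/backupuser/backupall.log 2>&1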


Backup Windows Domain Controller using NTBACKUP via cmd

Backing up your servers for disaster recovery these days must include your Domain Controller if you are utilizing a Windows Active Directory to manage your users. This is easy to do using a tool that comes installed on all Windows servers called NTBACKUP. Of course you can launch the GUI by entering NTBACKUP at the Run dialog or command line; however, the GUI does not lend itself to automated backups. So here is the .bat file that I execute via Windows Scheduled Tasks:

@echo off
:: variables
set logfile=D:\backup_log_file.txt
 
echo %Date% # # # Backing up system state containing: local Registry, COM+ Class Registration Database, System Boot Files, Certificates (if certificate server installed), Cluster database (if installed), NTDS.DIT, and SYSVOL folder >> %logfile%
ntbackup backup systemstate /J "System State Backup Job" /F "D:\system_state_backup.bkf" >> %logfile%
 
echo %Date% Backup Completed! >> %logfile%

(NOTE: I am doing this backup via an internal network and using a user account that exists on both systems. Security may dictate that you handle this differently based on your circumstances.)
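
To get Windows Scheduled Tasks to run the .bat file automatically you can use the GUI, or a schtasks command along these lines should work (the task name, script path, and time here are just examples, and older servers may want slightly different switches):

:: Create a daily task that runs the backup script at 11:00 PM
schtasks /create /tn "System State Backup" /tr "D:\systemstate_backup.bat" /sc daily /st 23:00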

After the file is executed by Windows Scheduled Tasks you will be left with a file that is ready to be backed up somewhere else. I do this by making a copy to another server using the method covered in a previous blog post, Windows backup bat script using xcopy.

Windows backup bat script using xcopy

Recently I had the need to create a .bat script that could be executed by Windows Scheduled Tasks. The purpose was to copy files from one server to another as a cheap way to back up the files created by MSSQL database backups. Here is the .bat file's contents (cleaned up to protect sensitive data):

@echo off
:: variables
set sourcedrive=D:\
set backupdrive=\\servername\d$
:: xcopy switches: /s /e copy subdirectories (including empty ones), /c keep
:: going past errors, /d copy only files newer than the destination copy,
:: /h include hidden and system files, /i treat the destination as a
:: directory, /r overwrite read-only files, /y suppress overwrite prompts
set backupcmd=xcopy /s /c /d /e /h /i /r /y
 
echo # # # Copying files
%backupcmd% "%sourcedrive%\directory_to_backup" "%backupdrive%\directory_to_store_backup"
 
echo # # Copying Complete!

(NOTE: I am doing this backup via an internal network and using a user account that exists on both systems. Security may dictate that you handle this differently based on your circumstances.)

Notice that for the backupdrive I am pointing at another Windows server using the d$ administrative share. This requires that the Windows Scheduled Task be executed as a user that is trusted on both machines. You could also specify a local directory on the same server if you did not need to copy the files to another machine.
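
If the account running the task is not already trusted on the remote server, one option is to authenticate against the share before the copy runs. This net use line is only a sketch; the server name, account, and password are placeholders:

:: Authenticate to the remote admin share before running xcopy
net use \\servername\d$ P@ssw0rd /user:servername\backupuser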

Get SSL running on Apache (CentOS)

I was recently playing with a new virtual server that had CentOS installed, and wanted to get SSL working for Apache. Since I was only setting up a development server I really didn’t need to purchase a certificate, and decided to use a self-signed certificate instead. Here is what I did:

First I needed to install ‘make’ and ‘mod_ssl’ to allow for this. (I use sudo, but you could log in as root.)

sudo yum install make
sudo yum install mod_ssl

Next I did the following steps:

  1. Go to /etc/pki/tls/certs
  2. Run the command sudo make mycert.pem
  3. Enter the information you are prompted for (country, state, city, host name, etc.); your certificate and key are then created together in mycert.pem
  4. Now edit /etc/httpd/conf.d/ssl.conf and update the following items (the certificate and its key live in the same .pem file):
    • SSLCertificateFile /etc/pki/tls/certs/mycert.pem
    • SSLCertificateKeyFile /etc/pki/tls/mycert.pem
  5. I was forced to create a symbolic link for the SSLCertificateKeyFile path as follows (I think this was supposed to happen automagically):
    • I went to /etc/pki/tls and created the symbolic link using the next line.
    • sudo ln -s certs/mycert.pem mycert.pem
  6. Restart Apache (/etc/init.d/httpd restart)

There, now you have a self-signed certificate for your Apache virtual hosts.
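
To confirm the certificate is actually being served, a quick check along these lines can be run on the server itself (this assumes Apache is listening on the default SSL port, 443):

# Fetch the served certificate and print its subject and validity dates
openssl s_client -connect localhost:443 < /dev/null 2>/dev/null | openssl x509 -noout -subject -dates

Since the certificate is self-signed, browsers will still throw a warning when you visit the site; that is expected on a development box.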