Wednesday, 8 August 2012

Creating a script to back up MySQL automatically

This tutorial will show you how to write a simple bash shell script which extracts your database schema and data, compresses the dump and emails you the backup. Utilising cron, the script can be configured to run in the early hours of the morning when your web server is least active.
After completing your database-enabled web site, you need an automated method for backing up all that valuable data. Below is a bash shell script which can be used to back up all your clients' databases via a nightly cron job.

Bash Shell Script (mysqlbackup)

#!/bin/bash
# Dump each database to its own .sql file
mysqldump -uroot -ppwd --opt db1 > /sqldata/db1.sql
mysqldump -uroot -ppwd --opt db2 > /sqldata/db2.sql

# Compress all the dumps into a single archive
cd /sqldata/
tar -zcvf sqldata.tgz *.sql

# Email the archive offsite
cd /myscripts/
perl emailsql.pl
A bash script is a text file containing commands that can be interpreted by the bash shell. Above is a cut-down version of the original script, which I keep in a directory called /myscripts/. This is important for when we look at adding the script to the cron tab.
The first line of the script tells the operating system where to find the bash interpreter; you may need to change this line to suit your system. The second and third lines call the MySQL utility mysqldump, which is used to export the data; the output of each command is redirected to a text file.
For example, the first mysqldump statement is made up of five parts:
  • -u = your MySQL username (substitute root with your username; note there is no space between the option and its value).
  • -p = your MySQL password (substitute pwd with your password).
  • --opt adds the most common and useful command-line options, resulting in the quickest possible export. Among others, this option automatically includes --add-drop-table, --add-locks, --extended-insert, --lock-tables and --quick.
  • the database name to extract (substitute db1 with your database name).
  • the > /sqldata/db1.sql redirects all the output to a file called db1.sql in the directory /sqldata/. You can create the file in any directory you have rights to; for consistency, though, I suggest naming the resulting .sql file after the database.
You simply repeat this process for each database you want to back up. The next line changes to the /sqldata/ directory, where tar compresses all the .sql files into one archive file called sqldata.tgz. After changing back to the scripts directory, the script finally runs a Perl script (emailsql.pl) which attaches the sqldata.tgz archive to an email and forwards it to two offsite email accounts. Alternatively, you could copy sqldata.tgz to an offsite machine by FTP or scp, as sketched below.
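If you prefer copying to emailing, the final two lines of the script could be replaced with something like the following (the host, user and remote path are placeholders):

scp /sqldata/sqldata.tgz backupuser@backup.example.com:/backups/sqldata.tgz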
After creating the script, you need to make it executable by setting its file permissions to 700 with chmod. At this point you should be able to test the script by entering /myscripts/mysqlbackup at the shell prompt.
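Assuming the script lives in /myscripts/ as above, that is:

chmod 700 /myscripts/mysqlbackup
/myscripts/mysqlbackup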

The emailsql.pl Script

The example Perl script below shows how you can attach the archive to an email and send it to your email inbox. This Perl script requires the MIME::Lite module, which you may need to install on your server (How to install Perl Modules); a typical install command is shown after the script.
#!/usr/bin/perl -w
use MIME::Lite;

$msg = MIME::Lite->new(
  From    => 'mysqlbackup@yoursite.co.uk',
  To      => 'you@yoursite.co.uk',
  Subject => 'sqldata.tgz MySQL backup!',
  Type    => 'text/plain',
  Data    => "Here are the MySQL database backups.");

$msg->attach(Type     => 'application/x-tar',
             Path     => '/sqldata/sqldata.tgz',
             Filename => 'sqldata.tgz');

$msg->send;
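If MIME::Lite is not already installed, it can usually be pulled in from CPAN, for example:

perl -MCPAN -e 'install MIME::Lite'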

Adding the Script to Cron

Cron is a scheduling tool for Unix; it allows you to specify when a program or script should be run. To edit your current cron table, enter crontab -e at the system prompt. This will load your current cron table into your default text editor; when you save and exit the editor, the crontab file will be loaded and ready for use.

0 2 * * * /myscripts/mysqlbackup
0 5 * * 0 /myscripts/reindex
The above example shows my current crontab. The file has two entries, one for each script I wish to run. The first entry tells cron to run the mysqlbackup script every morning at 2am. The second entry runs my search engine indexer every Sunday morning at 5am.
There are five fields for setting the date and time that a program should be run. The five time settings are in the following order.
  • Minutes - in the range of 0 - 59
  • Hour - in the range of 0 - 23
  • Day of month - in the range 1 - 31
  • Month - in the range 1 - 12
  • Day of week - in the range 0 - 6 (0 = Sunday)
Any field with a * means run every possible match, so for example a * in the day of month field will run the script every single day of the month at the specified time.
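As one more worked example (the script path is just a placeholder), the following entry would run a cleanup script at 3:30am on the first day of every month:

30 3 1 * * /myscripts/cleanup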

Sunday, 5 August 2012

Hash Functions

Need help creating hash functions? It is a difficult task, but here is a website with an excellent collection of material on the subject:
http://orion.lcg.ufrj.br/Dr.Dobbs/books/book5/chap13.htm
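For a quick start, one classic and widely used string hash is djb2; a minimal C version looks like this:

unsigned long djb2(const unsigned char *str)
{
  unsigned long hash = 5381;
  int c;

  while ((c = *str++))
    hash = ((hash << 5) + hash) + c;  /* hash * 33 + c */
  return hash;
}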

Debug C program

Electric Fence (libefence) puts inaccessible guard pages around each malloc'd block, so an out-of-bounds heap access segfaults immediately at the faulting line when the program runs under gdb:

# gcc -g -o prog prog.c -lefence
# gdb prog
> run
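For instance, this contrived program would often appear to run fine on its own, but under Electric Fence it crashes right at the bad write:

#include <stdlib.h>
#include <string.h>

int main(void)
{
  char *buf = malloc(8);
  strcpy(buf, "overflow!");  /* writes 10 bytes into an 8-byte block */
  free(buf);
  return 0;
}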

Friday, 3 August 2012

Creating buckets based on disk files

This code can be used to create buckets backed by files on disk, for when you do not have enough RAM to hold the whole hash table in memory. (-:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int writeFileBucket(unsigned int hash, char *keyword)
{
  FILE *bucketWrite = NULL;
  char file[64];
  char buffer[500];
  size_t len = 0;

  // build the bucket file name from the hash, e.g. "bucket42"
  snprintf(file, sizeof(file), "bucket%u", hash);

  // copy the keyword and append a space as a separator
  snprintf(buffer, sizeof(buffer), "%s ", keyword);

  // try to open the bucket file for appending
  bucketWrite = fopen(file, "a");
  if (bucketWrite == NULL)      // some problem creating the file on disk
  {
      printf("\n Problem creating bucket file: %s", file);
      return -1;
  }

  len = strlen(buffer);
  fwrite(buffer, len, 1, bucketWrite);
  fclose(bucketWrite);
  return 0;
}
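A minimal usage sketch, assuming writeFileBucket is in the same file (the toy hash below is just a stand-in; any function that spreads keys across buckets will do):

// hypothetical toy hash: sums the bytes modulo the bucket count
unsigned int toyHash(const char *key, unsigned int nbuckets)
{
  unsigned int h = 0;
  while (*key)
    h += (unsigned char)*key++;
  return h % nbuckets;
}

int main(void)
{
  char word[] = "example";
  unsigned int h = toyHash(word, 16);

  if (writeFileBucket(h, word) == 0)
    printf("appended \"%s\" to bucket%u\n", word, h);
  return 0;
}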

Export/Import csv files in Postgres

To export:

COPY products_273 TO '/tmp/products_199.csv' DELIMITER ',' CSV;

To import:

COPY products_273 FROM '/tmp/products_199.csv' DELIMITER ',' CSV;
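Note that server-side COPY reads and writes files on the database server and normally requires superuser rights; from psql you can use the client-side \copy variant instead, which works with paths on your own machine:

\copy products_273 to '/tmp/products_199.csv' csv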

How to use limits.conf on Linux

Sometimes you need to increase the open file limit for an application server, or the maximum shared memory for your ever-growing master database. In such a case you edit /etc/security/limits.conf and then wonder how to check whether you have set the new limits correctly. You do not want to find out they were wrong when your master DB fails to come up after some incident in the middle of the night...
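Entries in /etc/security/limits.conf look like this (the user name and values here are just placeholders):

# <domain>   <type>   <item>    <value>
appuser      soft     nofile    8192
appuser      hard     nofile    16384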

The best way is to change the limits and check them by running
ulimit -a

as the affected user.

But often you won't see the changes. The reason: you need to log in again, as limits are only applied at login!
But wait: what about users without a login? In such a case you log in as root (which might not share their limits) and sudo into the user, so there is no real login as that user. In this case you must be sure to use the "-i" option of sudo:
sudo -i -u <user>

to simulate an initial login with sudo. This will apply the new limits.
A last alternative is modifying the PAM behaviour. On most Linux distributions PAM loads "pam_limits.so" via /etc/pam.d/login, which means the limits are applied at login time. By adding the line

session    required   pam_limits.so

to other PAM service files (for example /etc/pam.d/su or /etc/pam.d/sudo), you can have the limits applied for those sessions as well.