• Tag Archives: linux
  • Symantec BackupExec 12.5 RALUS on Ubuntu

    Although Backup Exec 12.5 does not officially support Remote Agent installation on Ubuntu, I have had success installing it on my Ubuntu Hardy servers. Just download the agents from Symantec FileConnect and extract the contents. Then go to this folder: BEWS_12.5.2213_LINUX-UNIX-MAC-SAP_AGENTS/pkgs/Linux/VRTSralus/. Inside is a file called VRTSralus.tar.gz. Extract that and you will find a .deb file that you can install on a Debian-based system (VRTSralus-12.5.2213-0.i386.deb).

    Install that as root like so…

    dpkg -i VRTSralus-12.5.2213-0.i386.deb

    Once that is installed you will want to patch it. On the Backup Exec server, go to the Backup Exec installation directory and find the patch file in the RALUS agent folder (C:\Program Files\Symantec\Backup Exec\Agents\RALUS\Linux\). Copy VRTSralusPatch.tar.gz somewhere that you can extract its contents, and edit the installraluspatch.sh file.

    Look for this line (mine was on line 19).
    if [ `cat /etc/issue | grep Debian | wc -l` = 1 ]

    Change that to this…
    if [ `cat /etc/issue | grep Ubuntu | wc -l` = 1 ]

    …and run the installer.
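
    Put together, the patch step looks something like this (a sketch, assuming the script extracts into the current directory):

    tar -xzf VRTSralusPatch.tar.gz
    # make the OS check in the script recognize Ubuntu instead of Debian
    sed -i 's/grep Debian/grep Ubuntu/' installraluspatch.sh
    sh installraluspatch.sh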


  • MySQL Replication: Changing a Slave Database Server to be a Master

    By crmanski

    Scenario:
    Our web servers use a database backend which is also replicated to two other servers. In the event that there is a failure, one of the others can take over. oldserver1.berlinschools.org was not doing well and was also the master in a MySQL replication setup with two “slave” servers: one older (oldserver2.berlinschools.org) and one basically new (newserver.berlinschools.org). The need was to make it so oldserver1’s master status was passed on to newserver, and oldserver2 would look to newserver for slave updates.

    Preparation:
    To make this easier on myself, I first made sure that every PHP/MySQL web app I have running (Drupal, Moodle, Gallery, XOOPS, etc.) uses a DNS name for its MySQL server setting. I chose master.berlinschools.org. To test this name I made sure that the command dig master.berlinschools.org responded correctly. I also checked the hosts files for entries. One server did have an entry for this name, because it was using an external service provider’s DNS instead of our internal one.

    Making it happen:
    I opened 3 separate terminals to each mysql server and logged in as root. Then I logged onto mysql as root with this command…
    mysql -u root -pMyMYSQL-PasswordHERE (Note: no space between -p and the actual password)

    DNS:
    I moved back to my DNS configuration, pointed the master.berlinschools.org entry at the new IP address, and forced DNS to update. I also updated the hosts file on the one server that needed it.
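
    A quick way to confirm the change took effect is to query the name again and check that the answer is now newserver’s address:

    dig master.berlinschools.org +short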

    MySQL:
    On the old master I first ran…
    FLUSH LOGS;

    On the new master I ran…
    STOP SLAVE;
    RESET MASTER;
    CHANGE MASTER TO MASTER_HOST='newserversIPaddress';

    On the slave I ran…
    STOP SLAVE;
    RESET MASTER;
    CHANGE MASTER TO MASTER_HOST='newserversIPaddress'; (current IP of newserver)
    START SLAVE;

    Back to the old master I ran…
    STOP SLAVE;
    RESET MASTER;
    CHANGE MASTER TO MASTER_HOST='newserversIPaddress';
    START SLAVE;
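
    To verify that each slave picked up the new master, check the slave status on each one (a quick sanity check; both threads should say Yes):

    mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -E 'Master_Host|Slave_IO_Running|Slave_SQL_Running'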

    See the official MySQL replication FAQ for more information:

    http://dev.mysql.com/doc/refman/5.0/en/replication-faq.html


  • Drupal Load Balancing

    By crmanski

    When you are responsible for more than one site, you begin to worry about the time when things go wrong and how to avoid downtime on the sites you are in charge of. I read a pretty good article about clustering and scaling a Drupal website (http://www.johnandcailin.com/blog/john/scaling-drupal-open-source-infrastructure-high-traffic-drupal-sites) and I have been implementing some of the things it suggests. I have 3 Linux web servers to work with, all of varying age and ability. If one of them goes down, eventually I want the others to take over.

    Notes on File Syncing

    While going through a few things and reading the comments on the above page, I decided to use unison (http://www.cis.upenn.edu/~bcpierce/unison/) to check for changes to the /files directory and sync them between the web hosts. After the initial run it is very robust, and I will have multiple locations with an exact copy of the data files. I run two different processes from the same host like this…

    unison -silent -repeat 30 /var/www/drupal/files/ ssh://www1.example.org//var/www/drupal/files/ -log -logfile /var/log/unison/www1.log &
    unison -silent -repeat 30 /var/www/drupal/files/ ssh://www2.example.org//var/www/drupal/files/ -log -logfile /var/log/unison/www2.log &

    The -repeat 30 option above makes unison wait 30 seconds between checks for changes. I added these lines to the /etc/rc.local file so that they start on boot, and used SSH RSA key authentication to keep the scripts from asking for a password. (This page covers it pretty well, just adjust it to your environment: http://backuppc.sourceforge.net/faq/ssh.html ) To make sure that the two processes are running on the server, I used the Webmin Server Status module to add a check that looks for at least two unison processes.
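
    The key setup boils down to generating a passwordless key and copying it to each web host (a rough sketch, assuming the sync jobs run as root since they start from rc.local):

    ssh-keygen -t rsa (accept the defaults and an empty passphrase)
    ssh-copy-id root@www1.example.org
    ssh-copy-id root@www2.example.org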

    Notes on load balancing with Apache mod_proxy

    There were a few things that I had to figure out on my own while going through step 2. I was turning a single server/database setup into a front end server. On the front end server, make sure to remove the old directives or you will not be able to get to the balancer-manager UI. (I just commented mine out in case I needed to go back.)

    Since I have several virtual hosts on my server, I opted to leave the /etc/apache2/mods-available/proxy.conf file alone; I did not want to change “Deny from all” to “Allow from all” globally. Thus my config in the front end’s virtual host ended up looking like this. I’ll use szone.berlinwall.org as an example…

    <Location /balancer-manager>
        SetHandler balancer-manager
    
        Order Deny,Allow
        Deny from all
        Allow from 127.0.0.1
    </Location>
    
    <Proxy balancer://szonecluster>
        Allow from all
        # cluster member 1
        BalancerMember http://szone1.berlinwall.org:80 route=szone1
        # cluster member 2
        BalancerMember http://szone2.berlinwall.org:80 route=szone2
    </Proxy>
    
    ProxyPass /balancer-manager !
    ProxyPass / balancer://szonecluster/ lbmethod=byrequests stickysession=SZONEID
    ProxyPassReverse / http://szone1.berlinwall.org/
    ProxyPassReverse / http://szone2.berlinwall.org/
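
    One thing to double check: the proxy modules have to be enabled for any of this to work. On a Debian/Ubuntu-style Apache that would be something like…

    a2enmod proxy
    a2enmod proxy_http
    a2enmod proxy_balancer
    /etc/init.d/apache2 force-reload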

    A Note on Drupal Configuration

    In settings.php on each cluster member, in addition to setting the variable $cookie_domain = 'szone.berlinwall.org'; I also had to set $base_url = 'http://szone.berlinwall.org'; as well. If I did not, then when I used the site search the result links had either szone1 or szone2 in them.
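
    So the relevant lines in each member’s settings.php end up as:

    $cookie_domain = 'szone.berlinwall.org';
    $base_url = 'http://szone.berlinwall.org';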


  • Symantec Backup Exec Remote Agent 12 on Ubuntu Hardy

    By crmanski

    In a previous post I mentioned how I was able to get the Backup Exec agent running on Ubuntu Feisty. Recently we updated to version 12, and I just wanted to share that this method worked well for me on Ubuntu Hardy also. It was a little easier because the debs that were converted with alien just updated the previously installed packages smoothly.


  • Bind9 Error on Debian Etch

    By crmanski

    For those of you running a BIND DNS server, you may have come across this error, but only when trying to restart your DNS server:

    Reloading domain name service… : bind rndc: connect failed: 127.0.0.1#953: connection refused

    This has to do with the configuration (or lack thereof) of /etc/bind/rndc.conf.

    I noticed while searching the system for references to rndc that there is a program called /usr/sbin/rndc-confgen.
    Running it gives you the correct text to put in your rndc.conf file, as well as a couple of lines to add to your named.conf.

    Mine looked something like this…

     /usr/sbin/rndc-confgen
    # Start of rndc.conf
    key "rndc-key" {
        algorithm hmac-md5;
        secret "imnottelling==";
    };
    
    options {
        default-key "rndc-key";
        default-server 127.0.0.1;
        default-port 953;
    };
    # End of rndc.conf
    
    # Use with the following in named.conf, adjusting the allow list as needed:
    # key "rndc-key" {
    #     algorithm hmac-md5;
    #     secret "imnottelling==";
    # };
    #
    # controls {
    #     inet 127.0.0.1 port 953
    #         allow { 127.0.0.1; } keys { "rndc-key"; };
    # };
    # End of named.conf

    Note: Remove the # from the config lines in the second part that goes into the /etc/bind/named.conf file.

    After adding those few lines to named.conf and creating rndc.conf (make sure the user bind can read this file), I was able to restart/reload the bind9 service without error.
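
    Something along these lines covers the permissions and the restart (assuming the bind group, as on a stock Debian install):

    chown root:bind /etc/bind/rndc.conf
    chmod 640 /etc/bind/rndc.conf
    /etc/init.d/bind9 restart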



  • Symantec Backup Exec Remote Agent 11d on Ubuntu Feisty

    By crmanski

    I found a very helpful article on installing the Remote Agent for Linux on Ubuntu here: http://www.ubuntux.org/ubuntu-veritas-backup-exec-and-you

    Below are the steps I used on a Feisty server. Note: change to root first with sudo su (from the admin account).

    1. I registered my serial numbers and downloaded the latest .tar.gz for all agents. (called BEWS_11D.7170_LINUX-UNIX-MAC-NT4_AGENTS.2.tar.gz at this time.)

    2. Changed the port that webmin was running on (if you have it installed)

    3. Inside this file I found this folder: “cdimg/pkgs/Linux”. I zipped that up and uploaded it to a directory in the admin account’s home folder, then extracted VRTSralus.tar.gz and VRTSvxmsa.tar.gz into that folder (~/packages/backupexec/Linux on mine)

    4. Install required packages: apt-get install libstdc++2.10-glibc2.2 libstdc++5 alien

    (Note: I tried this on two servers. One failed, and /var/VRTSralus/beremote.service.log said that it could not find libstdc.so something or other. When comparing the two servers’ packages, libstdc++5 was the only one missing.)

    5. Add users and groups…

    groupadd beoper

    adduser root beoper

    6. cd ~/packages/backupexec/Linux

    7. alien -d *.rpm

    8. dpkg -i *.deb

    9. cp /opt/VRTSralus/bin/VRTSralus.init /etc/init.d/VRTSralus.init

    10. chmod a+x /etc/init.d/VRTSralus.init

    11. /etc/init.d/VRTSralus.init start
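
    To confirm the agent is actually up, you can look for the beremote process (the same service that writes beremote.service.log mentioned above):

    ps ax | grep beremote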

    Note: After having trouble on another server (the agent would crash every time I connected to it from the Backup Exec server to do a backup), I added this to the /etc/services file…

    grfs 6101/tcp # Backup Exec Agent
    ndmp 10000/tcp # Network Data Management Protocol (Veritas Backup Exec)
    webmin 10101/tcp

    Thanks to this page for the suggestion…
    http://newvibes.net/index.php/veritas-backup-exec-agent-for-unix-linux-on-debian



  • Setting Global Defaults in Firefox on LTSP

    By crmanski

    First, to set the proxy settings, we need to edit the file called all.js in the greprefs folder and find the line that starts with proxy.type. Here is how to do this…

    1. When logged onto the machine as an admin user, open the Terminal and type…
    sudo gedit /usr/share/firefox/greprefs/all.js

    This will open all.js in the gedit text editor. Search for the string “proxy.type” (press CTRL+F) and change that section to look something like this, replacing the placeholders with your relevant information…

    pref("network.proxy.type", 1);
    pref("network.proxy.ftp", "YourIPAddressToProxy");
    pref("network.proxy.ftp_port", 8080);
    pref("network.proxy.gopher", "YourIPAddressToProxy");
    pref("network.proxy.gopher_port", 8080);
    pref("network.proxy.http", "YourIPAddressToProxy");
    pref("network.proxy.http_port", 0);
    pref("network.proxy.ssl", "YourIPAddressToProxy");
    pref("network.proxy.ssl_port", 0);
    pref("network.proxy.socks", "YourIPAddressToSocks");
    pref("network.proxy.socks_port", 0);
    pref("network.proxy.socks_version", 5);
    pref("network.proxy.socks_remote_dns", false);
    pref("network.proxy.no_proxies_on", "localhost, 127.0.0.1, .yourlocal.domains");

    Now to set the Home Page and other preferences.

    To do this we need to edit this file, as above, in the terminal…

    sudo gedit /etc/firefox/pref/firefox.js

    // Disable search suggestions by default. This prevents queries going to the
    // internet for every letter that is typed into the search box.
    pref("browser.search.suggest.enabled", false);
    // Set up the home page…
    pref("browser.startup.homepage", "http://szone.berlinwall.org");
    pref("browser.startup.homepage_reset", "http://szone.berlinwall.org/welcome");

    Just change the URLs in there to reflect your desired default homepage for any account on the machine.




  • Application Choices on the LTSP Server

    By crmanski

    ~Application Choices~

    Application choices for LTSP have been a trial and error process. If you do not want your client machines using a certain program, the rule of thumb is: do not install it on the server. The base install of Ubuntu/Edubuntu comes with Gaim (instant messaging), Kino (DV video editing) and other applications that would either put a heavy load on the LTSP server or violate your acceptable use policies (AUP).

    Notes on Removing Default Packages:

    When you remove default applications, it also removes a meta-package called either “ubuntu-desktop” or “edubuntu-desktop”. This sounds alarming at first, but it does not break the system. Later on, if you are going to upgrade to a new release (dist-upgrade), you will want to re-add this package because the new release will most likely have other needed packages included under this meta-package.

    Removing an Installation Package:

    To remove the applications, use Synaptic Package Manager: right-click on the package, choose “Mark for Complete Removal”, and then Apply.
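
    The same removal can be done from the command line with apt-get, for example to purge the two applications mentioned above:

    sudo apt-get remove --purge gaim kino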



  • Compiling SARG on Ubuntu Dapper

    By crmanski

    The version of sarg (Squid Analysis Report Generator) that comes with Dapper is a few versions behind the current release. This is what I did to compile it on a Dapper installation that I had previously done no program compilation on…

    1. Install these packages on the command line…

      sudo apt-get install libgdchart-gd2-xpm-dev build-essential

    2. Download sarg from the website: http://sarg.sourceforge.net/sarg.php and unpack it in a directory. I did this in a terminal in my home directory…

      mkdir packages
      cd pack*
      wget http://umn.dl.sourceforge.net/sourceforge/sarg/sarg-2.2.3.1.tar.gz
      gzip -cd sarg-2.2.3.1.tar.gz | tar xvf -
      cd sarg*
      sudo su (this makes you root)
      ./configure --enable-sysconfdir=/etc/squid
      make && make install

    3. As root edit the /etc/squid/sarg.conf file to meet your needs…

      sudo gedit /etc/squid/sarg.conf

    4. Then run sarg…

      sudo sarg
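
    If you want the reports generated regularly, a root cron entry (crontab -e as root) is one way to do it. The schedule here is just an example; make install puts the binary in /usr/local/bin by default:

      0 6 * * * /usr/local/bin/sarg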

    Webmin (www.webmin.com) has a module that lets you configure the sarg report generation if you prefer a GUI.