• Category Archives: Uncategorized
  • Can the Conficker Worm be removed?

    By crmanski

    Over the last few months I have seen and heard of quite a few computers infected with the “Conficker” worm (also known as Downup, Downadup and Kido). It is truly a nasty piece of software that ingrains itself so far into the system that it has been impossible to remove from any of the infected systems except the one I had personally secured with the steps I outline here. On that machine the person had actually clicked the “yes” button to install the worm. Luckily it was with a non-administrative account, and I was able to isolate the files and remove them.

    I did some research around the web on so-called removal instructions and came across Microsoft’s…

    microsoft.com – “How do I remove the Conficker worm? If your computer is infected with the Conficker worm, you may be unable to download certain security products, such as the Microsoft Malicious Software Removal Tool or accessing certain Web sites, such as Microsoft Update. If you can’t access those tools, try using the Windows Live OneCare Safety Scanner.”

    This is totally useless. I’ve spent hours using a manually downloaded MSRT and the Live OneCare Scanner. They might tell you that they cleaned the machine, and everything will look good, but after a couple of reboots, even opening Internet Explorer once will bring it right back. You might be able to “remove” it, but the underlying problem remains: Windows XP and Vista set up all home user accounts as administrators of the local machine. The point of infection is still available, and any account that uses the computer can reinfect the system. Trying to remove this infection is an exercise in futility.

    If you have been infected, I think it is best to back up your documents and reinstall the operating system from scratch (don’t forget to scan them with an updated antivirus before putting them back on your computer). This way you can be truly sure that there is no infection. It is not wise to trust a machine that has had such a bad infection. This is especially true for someone who does online banking or conducts other business transactions on the Internet. It is better to start fresh and make sure that the system is secure. Follow the steps on the checklist. If you cannot, then find someone who can.


  • Drupal Load Balancing

    By crmanski

    When you are responsible for more than one site, you begin to worry about the times when things go wrong and how to avoid downtime for the sites you are in charge of. I read a pretty good article about clustering and scaling a Drupal website (http://www.johnandcailin.com/blog/john/scaling-drupal-open-source-infrastructure-high-traffic-drupal-sites) and I have been implementing some of the things suggested there. I have three Linux web servers to work with, all of varying age and ability. If one of them goes down, eventually I want the others to take over.

    Notes on File Syncing

    While going through a few things and reading the comments on the above page, I decided to use unison (http://www.cis.upenn.edu/~bcpierce/unison/) to check for changes to the /files directory and sync them between the web hosts. After the initial run it is very robust, and I will have multiple locations that hold an exact copy of the data files. I run two different processes from the same host like this…

    unison -silent -repeat 30 /var/www/drupal/files/ ssh://www1.example.org//var/www/drupal/files/ -log -logfile /var/log/unison/www1.log &
    unison -silent -repeat 30 /var/www/drupal/files/ ssh://www2.example.org//var/www/drupal/files/ -log -logfile /var/log/unison/www2.log &

    The lines above make unison wait 30 seconds between checks for changes. I added them to the /etc/rc.local file so that they start on boot, and used SSH RSA key authentication to keep the scripts from asking for a password. (This page covers it pretty well; just adjust it to your environment: http://backuppc.sourceforge.net/faq/ssh.html ) To make sure that the two processes are running on the server, I used the Webmin Server Status module to add a check that looks for at least two unison processes.
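
    If you have not done the key setup before, the gist looks something like this (a quick sketch; it assumes the sync jobs run as root, since they start from /etc/rc.local, and reuses the hostnames from the commands above):

    # Generate an RSA key pair with no passphrase (run once on the syncing host)
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

    # Push the public key to each web host so unison can log in unattended
    ssh-copy-id root@www1.example.org
    ssh-copy-id root@www2.example.org

    # Verify: this should return without prompting for a password
    ssh root@www1.example.org true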

    Notes on load balancing with Apache mod_proxy

    There were a few things that I had to figure out on my own while going through step 2. I was turning a single server/database setup into a front-end server. On the front-end server, make sure to remove the directives or you will not be able to get to the balancer-manager UI. (I just commented mine out in case I needed to go back.)

    Since I have several virtual hosts on my server, I opted to leave the /etc/apache2/mods-available/proxy.conf file alone; I did not want to change “Deny from all” to “Allow from all” globally. Thus my config in the front end’s virtual host ended up looking like this. I’ll use szone.berlinwall.org as an example…

    <Location /balancer-manager>
        SetHandler balancer-manager
    
        Order Deny,Allow
        Deny from all
        Allow from 127.0.0.1
    </Location>
    
    <Proxy balancer://szonecluster>
        Allow from all

        # cluster member 1
        BalancerMember http://szone1.berlinwall.org:80 route=szone1

        # cluster member 2
        BalancerMember http://szone2.berlinwall.org:80 route=szone2
    </Proxy>
    
    ProxyPass /balancer-manager !
    ProxyPass / balancer://szonecluster/ lbmethod=byrequests stickysession=SZONEID
    ProxyPassReverse / http://szone1.berlinwall.org/
    ProxyPassReverse / http://szone2.berlinwall.org/
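
    Note that none of this parses until the proxy modules are enabled; on a Debian/Ubuntu-style Apache that is roughly (a sketch, assuming the stock module names):

    a2enmod proxy proxy_http proxy_balancer
    /etc/init.d/apache2 restart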

    A Note on Drupal Configuration

    In settings.php on each cluster member, in addition to setting $cookie_domain = 'szone.berlinwall.org'; I also had to set $base_url = 'http://szone.berlinwall.org';. If I did not, then when I used the site search, the result links had either szone1 or szone2 in them.
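
    For clarity, the relevant lines in each member’s settings.php end up like this (just the settings above in context; the comment is mine):

    // Pin cookies and generated URLs to the shared front-end name so that
    // search-result links do not leak the szone1/szone2 backend hostnames.
    $cookie_domain = 'szone.berlinwall.org';
    $base_url = 'http://szone.berlinwall.org';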


  • Symantec Backup Exec Remote Agent 12 on Ubuntu Hardy

    By crmanski

    In a previous post I mentioned how I was able to get the Backup Exec agent running on Ubuntu Feisty. Recently we updated to version 12, and I just wanted to share that the same method worked well for me on Ubuntu Hardy too. It was a little easier this time because the debs converted with alien simply updated the previously installed packages smoothly.


  • Bind9 Error on Debian Etch

    By crmanski

    For those of you running a BIND DNS server, you may have come across this error when trying to restart it:

    Reloading domain name service… : bind
    rndc: connect failed: 127.0.0.1#953: connection refused

    This has to do with the configuration (or lack thereof) of /etc/bind/rndc.conf.

    While searching the system for references to rndc, I noticed there was a program called /usr/sbin/rndc-confgen.
    Running it gives you the correct text to put in your rndc.conf file, as well as a couple of lines to add to your named.conf.

    Mine looked something like this…

     /usr/sbin/rndc-confgen
    # Start of rndc.conf
    key "rndc-key" {
        algorithm hmac-md5;
        secret "imnottelling==";
    };
    
    options {
        default-key "rndc-key";
        default-server 127.0.0.1;
        default-port 953;
    };
    # End of rndc.conf
    
    # Use with the following in named.conf, adjusting the allow list as needed:
    # key "rndc-key" {
    #     algorithm hmac-md5;
    #     secret "imnottelling==";
    # };
    #
    # controls {
    #     inet 127.0.0.1 port 953
    #         allow { 127.0.0.1; } keys { "rndc-key"; };
    # };
    # End of named.conf

    Note: Remove the # from the config lines of the second part, which goes into the /etc/bind/named.conf file.

    After adding those few lines to named.conf and creating rndc.conf (make sure the bind user can read this file), I was able to restart/reload the bind9 service without error.
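
    For reference, the whole fix boiled down to something like this (a sketch; the chown/chmod values are my own choice for keeping the key readable by the bind user):

    rndc-confgen > /etc/bind/rndc.conf   # writes the key plus a commented named.conf section
    vi /etc/bind/rndc.conf               # move the commented section into /etc/bind/named.conf
    chown root:bind /etc/bind/rndc.conf  # the bind user must be able to read the key
    chmod 640 /etc/bind/rndc.conf
    /etc/init.d/bind9 restart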


  • Gubble – An Interesting Parental Control Firefox Extension

    By crmanski

    I just came across this very neat little Firefox extension that provides parental controls for the Firefox web browser. It is called Gubble (www.gubble.com). Once it is installed and you restart Firefox, it will prompt you to set up a parent account and then child accounts. The extension takes control of the browser and works on a whitelist philosophy: everything is blocked except what you allow for the child accounts. The parent account has a password and will let you go wherever you need, but the children can only access the sites you approve. Children can try to go to a site, and Gubble will ask them if they want to request access to it, which leaves a message that the parent will see the next time they log on. Overall this looks good for parents who want to let their younger children (elementary/middle school) explore the Internet without worrying as much about its “darker side”.


  • Symantec Backup Exec Remote Agent 11d on Ubuntu Feisty

    By crmanski

    I found a very helpful article on installing the Remote Agent for Linux on Ubuntu here: http://www.ubuntux.org/ubuntu-veritas-backup-exec-and-you

    Below are the steps I used on a Feisty server. Note: change to root first with sudo su (run as the admin account).

    1. I registered my serial numbers and downloaded the latest .tar.gz containing all agents (called BEWS_11D.7170_LINUX-UNIX-MAC-NT4_AGENTS.2.tar.gz at the time).

    2. Changed the port that Webmin was running on, if you have it installed (Webmin’s default port, 10000, is the same one the agent uses for NDMP; see the /etc/services entries below).

    3. Inside this archive I found the folder cdimg/pkgs/Linux. I zipped that up and uploaded it to a directory in the admin account’s home folder (~/packages/backupexec/Linux on mine), and then extracted VRTSralus.tar.gz and VRTSvxmsa.tar.gz into that folder.

    4. Installed the required packages: apt-get install libstdc++2.10-glibc2.2 libstdc++5 alien

    (Note: I tried this on two servers. One failed, and /var/VRTSralus/beremote.service.log said that it could not find libstdc.so-something-or-other. Comparing the two servers’ packages, libstdc++5 was the only one missing.)

    5. Add users and groups…

    groupadd beoper

    adduser root beoper

    6. cd ~/packages/backupexec/Linux

    7. alien -d *.rpm

    8. dpkg -i *.deb

    9. cp /opt/VRTSralus/bin/VRTSralus.init /etc/init.d/VRTSralus.init

    10. chmod a+x /etc/init.d/VRTSralus.init

    11. /etc/init.d/VRTSralus.init start
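
    A quick sanity check after step 11 (a sketch; beremote is the process name from the log path in the note under step 4, and 10000 is the NDMP port from the /etc/services entries below):

    # The agent process should be running...
    ps aux | grep [b]eremote

    # ...and listening on the NDMP port
    netstat -tlnp | grep :10000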

    Note: After having trouble on another server (the agent would crash every time I connected to it from the media server to do a backup), I added this to the /etc/services file…

    grfs 6101/tcp # Backup Exec Agent
    ndmp 10000/tcp # Network Data Management Protocol (Veritas Backup Exec)
    webmin 10101/tcp

    Thanks to this page for the suggestion…
    http://newvibes.net/index.php/veritas-backup-exec-agent-for-unix-linux-on-debian
