
I created a Nagios4 Docker repo: lylescott/nagios4

I had a need for a new Nagios install and figured this would be a good project to learn Docker with.

The GitHub repo is located at: https://github.com/LyleScott/docker-nagios4
The Docker Hub repo is located at: https://registry.hub.docker.com/u/lylescott/nagios4/

Configurable Options

You can customize the following settings within your Dockerfile:

ENV NAGIOS_VERSION 4.0.8
ENV NAGIOS_PLUGINS_VERSION 2.0.3
ENV NAGIOS_NRPE_VERSION 2.15

ENV WORK_DIR /tmp

ENV NAGIOS_HOME /opt/nagios
ENV NAGIOS_USER nagios
ENV NAGIOS_GROUP nagios
ENV NAGIOS_CMDGROUP nagioscmd
ENV NAGIOSADMIN_USER nagiosadmin
ENV NAGIOSADMIN_PASS nagios
ENV NAGIOS_TIMEZONE US/Eastern
ENV NAGIOS_WEB_DIR $NAGIOS_HOME/share

ENV APACHE_RUN_USER nagios
ENV APACHE_RUN_GROUP nagios
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
ENV APACHE_SERVERNAME localhost
ENV APACHE_SERVERALIAS docker.localhost

Quick Start

You can try out the image by pulling it from Docker Hub:

docker pull lylescott/nagios4
docker run -i -t -p 9443:443 lylescott/nagios4

Customize with a Dockerfile

FROM lylescott/nagios4
MAINTAINER Your Name <your@email.com>

USER root

RUN echo > ${NAGIOS_HOME}/etc/docker/www.example.com.cfg

COPY cfg/* ${NAGIOS_HOME}/etc/docker/

An example of something that might be in your cfg directory:

define host {
    # inherited from lylescott/nagios4
    use                             linux-box
    host_name                       www.example.com
    alias                           www.example.com
    address                         12.34.56.78
}

define service {
    # inherited from lylescott/nagios4
    use                             generic-service
    host_name                       www.example.com
    service_description             Host Alive
    check_command                   check-host-alive
}
Getting Front-End Routed URLs with AJAX Content Indexed & Crawler Friendly

My scenario: over at TideNugget.com, I have a Google Map with "markers" that you can click to bring up a modal that loads location-specific tide and weather information. Since the information only lives inside a modal with dynamically built content via AJAX, rather than on a brand new page, the relevant information isn't readily available to search engine crawlers that just hit the site.

The remedy is well known among people who deal with SEO, but it took me a few reads through Google's documentation on how they want you to do it to get it right.

Front End Routing

Step one is to have some front-end routing in place so that actions can be attached based on the URL's hash. This normally replaces the logic behind the 'click' event of a link (or whatever action loads the content via AJAX) with a simple window.location.hash change that a routing library then handles. In the end, this gives http://yoursite/#/some_state some meaning: some_state gets hooked into your routing library, which calls the things that were once fired by manually triggering the event to load the AJAX content.

An example link is http://tidenugget.com/#!/bookmark/tampa-bay-st-petersburg

There are lots of libraries that offer this capability; Google will turn up plenty of varieties, but I tend to use Finch because it's dead simple and lightweight.

A quick example (CoffeeScript) of how I use it in the above scenario, for those who are curious:

$ ->

    Finch.route '!/bookmark/:placeSlug', ({placeSlug}) ->
        marker = window.markers[placeSlug]
        if not marker
            return

        window.current_marker = marker
        $('#marker-details-modal').modal({show: true})

    Finch.listen()

Whatever library you choose is irrelevant to the crawler. You just want a link you can tell the crawler about that replaces what would normally be loaded via AJAX with a normal request returning a static response that has SEO attributes unique to that resource. This way, each link looks like a different resource, and search engines will have less of a chance of tagging it as a duplicate of another page (and therefore not indexing it).

Crawlable URLs

Step two is to make these links available to search engines so they know which links on the site are direct links to AJAX content. This is achieved by the search engines' crawlers looking for a specific marker in the URL (an exclamation mark after the '#' in the hash, in this case) that signifies the link is one that loads AJAX content. The crawler then provides a GET argument that we can handle on the server side, giving us a flag that the request is coming from a crawler and that static, SEO-optimized content unique to that resource should be returned instead.

So http://yoursite/#myhash becomes http://yoursite/#!myhash.

Now the crawler will inspect this link. When it comes time to crawl, Google and friends convert the URL http://yoursite/#!myhash to http://yoursite/?_escaped_fragment_=myhash (if you had GET arguments, they are appended like normal). This gives developers a parameter to use as a flag for when to generate a static HTML page that is a more crawler-friendly version of the page.

For example, I intercept this parameter in my Django view and render a template that is more appropriate for a search engine and not meant to be visited by humans. I try to update as many things as possible that will aid SEO and help my content get indexed correctly.

  • page title is updated
  • meta keywords are updated
  • meta description is updated
  • an h1 element is present with a more descriptive title
  • text I want indexed is placed in a p element
  • limited JavaScript and CSS is generated to get response times down
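
The server-side branch itself is tiny. Here is a minimal sketch of the idea in plain Python (bookmark_view and the template names are hypothetical stand-ins for the actual Django view, with get_params playing the role of request.GET):

```python
def bookmark_view(get_params, place_slug):
    """Pick a template for the request.

    A hypothetical stand-in for a Django view; get_params plays the
    role of request.GET.
    """
    if '_escaped_fragment_' in get_params:
        # Crawler request: serve the static, SEO-friendly template with
        # a real title, meta tags, an h1, and indexable text.
        return 'bookmark_crawler.html'
    # Normal visitor: serve the regular page and let the front-end
    # router handle the hash and load the modal via AJAX.
    return 'index.html'
```

The real view would of course render the template with location-specific context rather than return its name.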

Get Indexed

You can of course submit all of the links in the "webmaster tools" section of your favorite search engines, but this is a lot of work and you won't be listed on a lot of the tinier search engines.

It's far easier to list them directly in the sitemap for your site so the crawlers will pick them up automatically.

<urlset>  
  ...
  <url>
    <loc>http://tidenugget.com/#!/bookmark/adak-island-adak-bight</loc>
    <changefreq>weekly</changefreq>
    <priority>0.5</priority>
  </url>
  <url>
    <loc>http://tidenugget.com/#!/bookmark/adak-island-adak-island</loc>
    <changefreq>weekly</changefreq>
    <priority>0.5</priority>
  </url>
  ...
</urlset>  

Google took about a week to index the 4280 links I submitted this way. They are slowly showing up on other search engines.

Python and the Underscore Prefix

Underscore prefixes in Python provide a way to protect functions, methods, and variables... kinda. Python simply has no notion of private variables. There are, though, some Pythonic ways to declare that a variable, function, or method shouldn't be consumed outside of the place where it is directly used.

Single Underscore

When you prefix something with a single underscore, it politely asks developers interacting with that code not to use the prefixed thing in any direct manner outside the scope in which it was defined. If you see it in third-party code, it means you should not use or depend on it in any way.

Note, though, that a thing with a single underscore prefix can still be used as if the underscore weren't there, so the prefix is only symbolic: an honor system that hopefully people adhere to.

For example,

class FooBar(object):
    foo = 'abc123'
    _bar = 'qwerty'

    def foofunc(self):
        print 'foofunc!'

    def _barfunc(self):
        print 'barfunc!'

The internal representation of the class looks like you would expect, listing the names of the defined variables and methods exactly as they were defined.

print dir(FooBar)
[ ..., '_bar', '_barfunc', ..., 'foo', 'foofunc' ..., ]

Abuse is easy, though. You can still use the single-underscore-prefixed things by name like you would any other variable or method. See why I said it's based on the honor system?

foobar = FooBar()
foobar.foofunc()   # foofunc!
foobar._barfunc()  # barfunc!
print foobar._bar  # qwerty
foobar._bar = 'hello'
print foobar._bar  # hello

This example used class methods and variables; the same rules apply to a variable or function defined at module scope.
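
One concrete module-scope consequence worth knowing: `from module import *` skips names with a leading underscore (unless the module defines `__all__`). A quick self-contained demonstration in Python 3 syntax; the module `demo_helpers` is fabricated in memory just for this sketch:

```python
import sys
import types

# Fabricate a throwaway module (the name 'demo_helpers' is made up).
demo = types.ModuleType('demo_helpers')
exec(
    '_secret = "qwerty"\n'
    'def _internal():\n'
    '    return "internal"\n'
    'def public():\n'
    '    return _internal()\n',
    demo.__dict__,
)
sys.modules['demo_helpers'] = demo

# Wildcard-import it into a fresh namespace.
ns = {}
exec('from demo_helpers import *', ns)

print('public' in ns)     # True  -- no leading underscore
print('_secret' in ns)    # False -- skipped by import *
print('_internal' in ns)  # False -- skipped as well
```

Direct imports (`from demo_helpers import _secret`) still work, of course; only the wildcard form respects the convention.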

Double Underscore

When you prefix something with a double underscore, it sternly tells developers interacting with that code that the prefixed thing should absolutely, positively not be used in any direct manner outside the scope in which it was defined.

A thing with a double underscore prefix gets mangled, meaning that the class's variable or method is renamed internally to protect it from being used directly. Like the single underscore prefix, this protection is only symbolic and still based on the honor system, though it is harder to use the variable or method by accident.

Abuse is still possible, especially given that the result of mangling always follows the same pattern: _TheClassName is prefixed to the attribute name internally.

For example,

class FooBar(object):
    foo = 'abc123'
    __bar = 'qwerty'

    def foofunc(self):
        print 'foofunc!'

    def __barfunc(self):
        print 'barfunc!'

The variable __bar and the method __barfunc are both mangled internally.

print dir(FooBar)
[
 ...,
 '_FooBar__bar',
 '_FooBar__barfunc',
 ...
 'foo',
 'foofunc'
 ...,
]

As you can see, __bar and __barfunc are mangled using the FooBar classname. This makes direct access more difficult and deliberate.

foobar = FooBar()
foobar.__barfunc()         # AttributeError: 'FooBar' object has no attribute '__barfunc'
foobar._FooBar__barfunc()  # barfunc!

Though access is possible, it is far from good practice!

Python Class vs Instance Variable

There are a million topics written on this, so I'm not going to delve into the gory details. Instead, just check out the snippet below. I think it says it all.

A class variable is shared between all instances of a class. You access it by using the class's name in the dotted reference rather than self (unless you are in a class method, where the cls argument can be used instead). For example, if Car is a class with a variable number_tires and Honda, Jaguar, and VolksWagon are all instances, then changing number_tires on the Car class is reflected when you access it from the Honda, Jaguar, or VolksWagon instances.

An instance variable is scoped to a single instance of a class. Meaning, if Car is a class with a variable number_tires and Honda, Jaguar, and VolksWagon are all instances, then if number_tires is changed on the Honda instance, the new value is ONLY reflected in the Honda instance; Jaguar and VolksWagon keep whatever value they had.

Assigning to a class variable through self (including augmented assignments like self.foo += 1) copies the value into the instance's scope, creating an instance variable that shadows the class one.

class FooBar(object):
    foo = 0
    bar = 0

    def __init__(self):
       FooBar.foo += 1
       self.bar += 1
       self.foo += 1

    def __str__(self):
        return '\n'.join((
            '--------',
            'FooBar.foo \t {} \t (class variable)'.format(FooBar.foo),
            'self.bar \t {} \t (instance variable)'.format(self.bar),
            'self.foo \t {} \t (instance variable)'.format(self.foo),
        ))

As you can see,

  • the class variable was incremented across all instances
  • the instance variable was only incremented in the single instance it was used in
  • (bonus) a class variable used as an instance variable gets copied into the instance's scope and does not affect the class version if you alter it
--------
FooBar.foo       1       (class variable)
self.bar         1       (instance variable)
self.foo         2       (instance variable)
--------
FooBar.foo       2       (class variable)
self.bar         1       (instance variable)
self.foo         3       (instance variable)
--------
FooBar.foo       3       (class variable)
self.bar         1       (instance variable)
self.foo         4       (instance variable)
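
For reference, the output above comes from just printing freshly created instances. Here is a self-contained Python 3 version of the snippet plus a driver loop (the three-instance loop is my reconstruction based on the three output blocks):

```python
class FooBar:
    foo = 0
    bar = 0

    def __init__(self):
        FooBar.foo += 1  # always increments the shared class attribute
        self.bar += 1    # reads the class attr, but WRITES an instance attr
        self.foo += 1    # copies the just-incremented class value, then adds 1

    def __str__(self):
        return '\n'.join((
            '--------',
            'FooBar.foo \t {} \t (class variable)'.format(FooBar.foo),
            'self.bar \t {} \t (instance variable)'.format(self.bar),
            'self.foo \t {} \t (instance variable)'.format(self.foo),
        ))


for _ in range(3):
    print(FooBar())
```
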
US State Polygons with Events on a Google Map

For a weekend project I was messing with, I needed actionable mouse events for each US state on a Google map. This is how I did it:

US State Polygons

I was looking for ways to draw state borders on a Google map and stumbled across an XML file containing, for each state, the list of coordinates that outline its border. Using the Google Maps API, you can create a Polygon from each state's set of points, and after you create the polygon, you can attach events to it.

Data Conversion

Since the Google Maps API is in JavaScript, I really wanted a JSON representation of the XML file... not to mention it would be nice to trim the data that isn't needed.

from pprint import pprint
from xml.dom import minidom


def xml2dict(path):
    data = []
    with open(path) as fp:
        xml = ''.join([line.strip() for line in fp.readlines()])
        doc = minidom.parseString(xml).documentElement
        for state in doc.childNodes:
            # str() to avoid the 'u' prefix for unicode strings.
            state_name = str(state.attributes['name'].value)
            points = []
            for point in state.childNodes:
                points.append([
                    float(point.attributes['lat'].value),
                    float(point.attributes['lng'].value),
                ])
            data.append([state_name, points])
    return data


def write_file(path, data):
    with open(path, 'w') as fp:
        fp.write('var stateCoords = ')
        pprint(data, stream=fp)


if __name__ == '__main__':
    xml_file = 'states.xml'
    out_file = 'coords.js'
    write_file(out_file, xml2dict(xml_file))
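
One caveat: pprint writes Python literal syntax (single-quoted strings), which happens to also be valid JavaScript for this data but is not valid JSON. If you ever want to load the file with a strict JSON parser, json.dumps is a drop-in replacement for the write step (a sketch, not the original script):

```python
import json


def write_file(path, data):
    # json.dumps emits double-quoted, strictly valid JSON, which is
    # also valid JavaScript on the right-hand side of the assignment.
    with open(path, 'w') as fp:
        fp.write('var stateCoords = ')
        fp.write(json.dumps(data, indent=2))
        fp.write(';\n')
```
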
Get size and row count of MySQL databases and tables

List all databases with row count and size

SELECT
    table_schema `DB`,
    SUM(table_rows) `Row Count`,
    SUM(data_length + index_length)/1024/1024 `Size (MB)`
FROM
    information_schema.tables
GROUP BY
    table_schema
ORDER BY
    table_schema;
+--------------------+-----------+--------------+
| DB                 | Row Count | Size (MB)    |
+--------------------+-----------+--------------+
| information_schema |      NULL |   0.00878906 |
| intellitype        |         2 |   0.01562500 |
| intellitypesite    |        35 |   0.37500000 |
| jobfoo             |      8048 |   3.34375000 |
| mysql              |      2059 |   0.64330387 |
| performance_schema |     23014 |   0.00000000 |
| tidenugget         |   2940747 | 263.10937500 |
+--------------------+-----------+--------------+

Obviously, you can add a WHERE table_schema='YOUR_DATABASE_NAME' clause to filter on a database name...

List all tables in database with row count and size

SELECT
    table_name,
    table_rows,
    (data_length + index_length)/1024/1024 `Size (MB)`
FROM
    information_schema.tables
WHERE
    table_schema='tidenugget'
ORDER BY
    table_name;
+----------------------------+------------+--------------+
| table_name                 | table_rows | Size (MB)    |
+----------------------------+------------+--------------+
| auth_group                 |          0 |   0.03125000 |
| auth_group_permissions     |          0 |   0.06250000 |
| auth_permission            |         27 |   0.04687500 |
| auth_user                  |          0 |   0.03125000 |
| auth_user_groups           |          0 |   0.06250000 |
| auth_user_user_permissions |          0 |   0.06250000 |
| django_admin_log           |          0 |   0.04687500 |
| django_content_type        |          9 |   0.03125000 |
| django_session             |          0 |   0.03125000 |
| restapi_place              |       1881 |   0.35937500 |
| restapi_prediction         |    2938332 | 262.28125000 |
| restapi_region             |        279 |   0.06250000 |
+----------------------------+------------+--------------+
Remove a blocked host from fail2ban

See what hosts are being blocked.

l@ln1:~$ sudo iptables -L 
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
fail2ban-ssh-ddos  tcp  --  anywhere             anywhere             multiport dports ssh
fail2ban-ssh  tcp  --  anywhere             anywhere             multiport dports ssh

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain fail2ban-ssh (1 references)
target     prot opt source               destination         
DROP       all  --  188.127.225.85       anywhere            
DROP       all  --  219.138.203.198      anywhere            
DROP       all  --  server77-68-105-205.live-servers.net  anywhere            
DROP       all  --  essen107.server4you.net  anywhere            
RETURN     all  --  anywhere             anywhere            

Chain fail2ban-ssh-ddos (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere 

Note the IP of the host.

l@ln1:~$ sudo iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
fail2ban-ssh-ddos  tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 22
fail2ban-ssh  tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 22

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain fail2ban-ssh (1 references)
target     prot opt source               destination         
DROP       all  --  188.127.225.85       0.0.0.0/0           
DROP       all  --  219.138.203.198      0.0.0.0/0           
DROP       all  --  77.68.105.205        0.0.0.0/0           
DROP       all  --  217.172.182.32       0.0.0.0/0           
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           

Chain fail2ban-ssh-ddos (1 references)
target     prot opt source               destination         
RETURN     all  --  0.0.0.0/0            0.0.0.0/0          

Put the following in a file.

#!/bin/bash

if [ -z "$1" ]; then
    echo "USAGE: $0 <ipaddr> [<chain, defaults to fail2ban-ssh>]"
    exit 1
fi

host=$1
chain=$2

if [ -z "$chain" ]; then
    chain="fail2ban-ssh"
fi

sudo iptables -D "$chain" -s "$host" -j DROP

Run it.

l@ln1:~$ sh drop_from_fail2ban.sh
USAGE: sh drop_from_fail2ban.sh <ipaddr> [<chain, defaults to fail2ban-ssh>]
l@ln1:~$ sh drop_from_fail2ban.sh 77.68.105.205
Migrate Git repositories to a different host

I wrote the following little script to migrate some of my private git repositories to a different server.

The following assumes you can already commit to the new host. You might have to set up the repositories in gitosis/gitolite first if you are using one of those.


First, create a text file containing a list of repositories you want to migrate.

lyle@localhost:/tmp/git$ cat repos.txt 
c__project1
c__progject2
django__project1
python__project1

Create the script to do the work. Or you can download it here.

#!/bin/sh
# lyle@digitalfoo.net

file=$1
from=$2
to=$3

usage() {
    echo "USAGE: $0 <repo_file> <user@fromhost> <user@tohost>"
    exit 1
}

if [ -z "$file" ] || [ -z "$from" ] || [ -z "$to" ]; then
    usage
fi

for repo in `cat $file`; do
    git clone $from:$repo
    cd $repo
    git remote set-url origin $to:$repo
    git push origin master
    cd ..
done

Run the script.

lyle@localhost:~$ mkdir gitmigrate
lyle@localhost:~$ cd gitmigrate
lyle@localhost:~/gitmigrate$ sh git_migrate.sh
USAGE: git_migrate.sh <repo_file> <user@fromhost> <user@tohost>

This should pull the repositories from user@fromhost:reponame, reconfigure the push URL to user@tohost:reponame, and push the repository (including previous history) to the new destination.

Useful Python Development Tools

Tools will obviously differ from dev to dev, but the following are a few that I think are very helpful on a general basis.

pip

pip is basically a Python package manager: a tool to install and manage Python modules. It is super handy because you can install different versions of modules, easily export/import lists of modules, install modules straight from a git repository, and it generally has more modules than the package manager of whatever operating system you are on.

# List installed packages and versions.
pip freeze

# Search for python modules to install.
pip search <partial_name>

# Install a python module from pip's index.
pip install <package_name>

# Install python module(s) and matching versions from a list of packages.
pip install -r <requirements_file>

# Install from a git repository.
pip install -e git+https://git.repo/some_pkg.git#egg=SomePackage

# Install from a mercurial repository.
pip install -e hg+https://hg.repo/some_pkg#egg=SomePackage

# Install from a subversion repository.
pip install -e svn+svn://svn.repo/some_pkg/trunk/#egg=SomePackage

# Install from a git repository, using the "feature" branch.
pip install -e git+https://git.repo/some_pkg.git@feature#egg=SomePackage

# Loop through all installed packages and upgrade as necessary.
pip freeze --local | grep -v '^\-e' | cut -d = -f 1 | xargs pip install -U

virtualenv

virtualenv is a tool that creates a (nearly) standalone environment for a Python project by allowing you to install Python modules into a project specific place.

Install virtualenv through pip, package manager (python-virtualenv on debian based machines), or visit http://www.pip-installer.org/en/latest/.

$ virtualenv venv
$ source venv/bin/activate
(venv)$ pip freeze
wsgiref==0.1.2
(venv)$ pip install pyflakes
(venv)$ pip freeze
pyflakes==0.7.2
wsgiref==0.1.2


Do not forget to source bin/activate inside the virtualenv you want to work in! That sets up the correct PATH and Python environment.

bpython

bpython is billed as a fancy Python interpreter... and fancy it is! Most notably, it provides syntax highlighting and autocompletion, presents docstrings alongside completions, allows you to save an interpreter session to a file, and more.

When I need to jump into an interpreter to test something, this is my go-to app.

(Screenshots: syntax highlighting; method autocomplete with docstring; module autocomplete.)

pyflakes

pyflakes is a quick way to scan a Python source file for errors and syntactic problems. It's not as thorough as pylint, but it is much faster and less noisy. I tend to use this while I develop and wait to use pylint until commit time.

pylint

pylint is another source code analyzer that can find many types of source file errors and coding standard violations, is highly configurable via ~/.pylintrc, and provides a handy scoring mechanism to give you a general idea of code quality.

line_profiler

line_profiler is a module for doing line-by-line profiling of functions. It also includes kernprof, a convenient script for running either line_profiler or the Python standard library's cProfile or other profile modules.

To use it, install the module (with pip, your package manager, or browse http://pythonhosted.org/line_profiler/), decorate a function or method with @profile, and run kernprof on something that would call the decorated functions/methods.

As a brief example, we have the following code we want to line profile:

# Simple line_profiler example.

@profile
def slow_test():
    foobar = [2] * 1000
    print 'go go go, slow test gadget...'
    for i in range(1000):
        foobar[i] = foobar[i] * foobar[i] 
        for j in range(1000):
            if not i or not j:
                continue
            if i % j == 0 and int(i / j) % 2 == 0:
                foobar[j] = foobar[i] + foobar[j]

slow_test()
print 'DONE'

Then execute the Python script with kernprof -l -v <file.py> instead of the usual python invocation. This also creates a file.py.lprof file that you can run more stats on later.

(venv)lyle@localhost:~/dev/lineprof1$ kernprof.py -l -v lineprof.py
go go go, slow test gadget...
DONE
Wrote profile results to lineprof.py.lprof
Timer unit: 1e-06 s

File: lineprof.py
Function: slow_test at line 2
Total time: 2.203 s

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     2                                           @profile
     3                                           def slow_test():
     4         1            9      9.0      0.0      foobar = [2] * 1000
     5         1           22     22.0      0.0      print 'go go go, slow test gadget...'
     6      1001          758      0.8      0.0      for i in range(1000):
     7      1000         1012      1.0      0.0          foobar[i] = foobar[i] * foobar[i] 
     8   1001000       669046      0.7     30.4          for j in range(1000):
     9   1000000       765770      0.8     34.8              if not i or not j:
    10       999          621      0.6      0.0                  continue
    11    998001       762960      0.8     34.6              if i % j == 0 and int(i / j) % 2 == 0 :
    12      3178         2798      0.9      0.1                  foobar[j] = foobar[i] + foobar[j]

pypy

cython

runsnakerun

pdb

From http://docs.python.org/2/library/pdb.html: pdb is an interactive source code debugger for Python programs. It supports setting (conditional) breakpoints and single stepping at the source line level, inspection of stack frames, source code listing, and evaluation of arbitrary Python code in the context of any stack frame. It also supports post-mortem debugging and can be called under program control.

Refer to http://docs.python.org/2/library/pdb.html for neat ways to manually navigate execution flow. I mainly use it for interrupting program flow to inspect variables in a desired scope. Most people would just craft a print statement for this, but that is a pain and can take up a lot of time if your program needs a while to bootstrap.

To start debugging at the very beginning of execution, run a Python script invoked with the pdb module:

python -m pdb <file.py>

If you want to drop into the debugger at a certain point in your program, just import pdb into that module and call pdb.set_trace().

import pdb

j = 0
for i in range(10):
    isEven = (i % 2 == 0)
    if isEven:
        j += 1

    if i == 6:
        pdb.set_trace()

print 'DONE'

(venv)lyle@localhost:~/dev/lineprof1$ python foobar.py
> /Users/lyle/dev/lineprof1/foobar.py(4)()
-> for i in range(10):
(Pdb) print i, j, isEven
6 4 True
(Pdb) where
> /Users/lyle/dev/lineprof1/foobar.py(4)()
-> for i in range(10):
(Pdb) next
> /Users/lyle/dev/lineprof1/foobar.py(5)()
-> isEven = bool(i % 2 == 0)
(Pdb) next
> /Users/lyle/dev/lineprof1/foobar.py(6)()
-> if isEven:
(Pdb) step
> /Users/lyle/dev/lineprof1/foobar.py(9)()
-> if i == 6:
(Pdb) step
> /Users/lyle/dev/lineprof1/foobar.py(4)()
-> for i in range(10):
(Pdb) print i, j, isEven
7 4 False
(Pdb) continue
DONE

timeit

vim modules

shedskin

Installing 32bit Skype on 64bit Debian Wheezy

I am redoing my work box with Debian Wheezy and, as of today, Skype appears to offer only a 32bit Debian package for Wheezy. This is fine, but starting from a pretty minimal install, I had to do a few things to get the system to support the 32bit deb. FYI...

First off, I obviously downloaded the deb package from here.

# dpkg --add-architecture i386
# apt-get update
# dpkg -i ~/path/to/skype-debian_X.X.X.X-X_i386.deb
# apt-get install -f
MySQL Admin Login Recovery in Linux

It happens.

$ sudo /etc/init.d/mysql stop
$ sudo mysqld --skip-grant-tables &
$ mysql -u root -p mysql
mysql> UPDATE user SET password=PASSWORD('NEWPASSWORD') WHERE User='root';
mysql> FLUSH PRIVILEGES;
mysql> quit
REST Resources

I've been experimenting with some APIs that I have been building just for fun and decided to convert them into RESTful APIs. The following are some links and resources that I found helpful along the way.

Some of the resources might have a Python twist to them...

General Resources

Involving Python Things

Developing RESTful Web APIs with Python, Flask and MongoDB

Relevant

Associate the PyDev Eclipse Plugin with .wsgi Files

Assuming you have already installed PyDev, this is an easy task.

  • Go to Preferences in the File menu
  • General --> Editors --> File Associations
  • In the File types section:
  • Add... --> File type: *.wsgi --> OK
  • Select the *.wsgi extension that you created in the File Types section.
  • In the Associated editors section:
  • Add... --> Python Editor --> OK
  • Add... --> Text Editor --> OK
Digitalfoo.net 2.0 has arrived!

You might not notice it, but this site got a 100% rewrite. I first wrote it as a home-grown CMS when I was learning PHP ages ago. Naturally, it had grown patchy and clunky, and errors crept in over the course of moving hosts a few times and generally being too busy in life to give it much care. Since things have changed a lot since then and I have taken a recent interest in learning Django, I decided to give it a shot. For what it is worth, I am extremely impressed.

Some content has been removed. Reasons for this include incomplete posts, loose guides I can no longer vouch for being correct, and posts that have become common knowledge or so many people have explained it better than myself that it would be a waste of your time to come here instead of the first few search results on Google. If you want access to an old post, please contact me and let me know. I would be happy to provide it somehow.

If you see any errors or other funkiness, please also let me know. I had some fun developing this site, so I might have gotten a little overzealous in places. :)

Enjoy!

Import an Existing git Repository to GitHub

GitHub does not care whether you created the repo there or not... after all, it's still plain ol' git on the backend. That being said, you can easily change the push origin of an existing repository and keep all of the history and information you are used to.

  1. create a repository on the GitHub website
  2. check out the existing repository that you want to import into GitHub
  3. change the push location
  4. push to the new location

First, go to GitHub and create a blank repository. Then, tell git what you want to do.

$ git clone <some repo>
$ cd <some repo>
$ git remote -v
origin    olduser@oldhost.com:OldRepoName.git (fetch)
origin    olduser@oldhost.com:OldRepoName.git (push)
$ git remote rm origin
$ git remote add origin git@github.com:YourUserName/NewRepoName.git
$ git push -u origin master
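
As an aside, newer versions of git can swap the URL in one step with git remote set-url, so you can skip the remove/add pair above (the repo name here is a placeholder):

```shell
# Point the existing origin remote at the new GitHub location.
git remote set-url origin git@github.com:YourUserName/NewRepoName.git
# Confirm that origin now points at GitHub.
git remote -v
```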

Now, you should be able to use git like you normally do... only now you're pushing to GitHub!

Dealing with Dashes in MySQL Database Names

I keep running across this problem, and lots of people saying it simply can't be done (because they try to escape the dash, or various other tricks that you would think would work simply don't...). The solution is simple: surround the database name with backticks. The backtick is on the same key as the tilde (~), to the left of the 1 key; press it without SHIFT.

Now you should be able to use the database name however you would like.

mysql> CREATE DATABASE foo-bar;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '-bar' at line 1
mysql> CREATE DATABASE `foo-bar`;
mysql> DROP DATABASE `foo-bar`;
Disable Computer, Home, Trash, Network Gnome Desktop Icons in GNOME 2.x

I had to do this when making a Kiosk terminal and sometimes do this so I do not see clutter with transparent terminals.

  1. install gconf-editor
  2. run gconf-editor from the terminal or go to the GNOME menu bar -> Applications -> System Tools -> Configuration Editor

Disable just those icons:

  1. navigate to /apps/nautilus/desktop
  2. uncheck computer_icon_visible
  3. uncheck home_icon_visible
  4. uncheck network_icon_visible
  5. uncheck trash_icon_visible

Disable all desktop icons:

  1. navigate to /apps/nautilus/preferences
  2. uncheck show desktop
Creating and Restoring Harddrive Images with dd

Making a bit-by-bit backup of a disk is great when you want to transfer all harddrive data from one disk to another, or simply to make an exact copy of a disk without worrying about filesystem trickery or permission annoyances. A bit-by-bit backup also captures all of the partition information, so when the backup is restored, the disk partitions should be identical.

dd is a utility that comes with nearly any Linux/Unix derivative. It allows copying the entire disk to a file, known as an image. Note that dd images the entire disk, meaning all bits, even the bits holding no data. So if you have a 1TB disk with 4GB of data, the image that dd creates will be 1TB.

Make sure the disk getting imaged is NOT mounted. Check 'df -h' to see what is mounted. If the disk you need to image is mounted, as the operating system disk would be, boot up a Linux livecd of some sort and do your backup from that.

Create Image

Without Compression

# dd if=/dev/sdX of=/save/path/sdX.img conv=sync,noerror bs=64k

With gzip Compression

# dd if=/dev/sdX conv=sync,noerror bs=64k | gzip -c  > /save/path/sdX.img.gz
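
To see how much the compression buys you on a mostly empty disk, here is a quick experiment using a small zero-filled file to stand in for unused disk space (the filenames are made up for the demo):

```shell
# Create a 1MB file of zeros, mimicking unused disk space.
dd if=/dev/zero of=blank.img bs=1024 count=1024 2>/dev/null
# Compress it the same way as the disk image above.
gzip -c blank.img > blank.img.gz
# The raw copy is 1MB; the gzip copy is only a few KB.
ls -l blank.img blank.img.gz
```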

Restore Image

Restoring from Raw Image

# dd if=/save/path/sdX.img of=/dev/sdX

Restoring from gzip Archive

# gzip -dc /save/path/sdX.img.gz | dd of=/dev/sdX

Extra Information

If you would like to see the progress of the transfer, use the pv utility. It works pretty much like cat as far as output goes, but also displays the file size transferred, progress bar, time remaining, etc. I have discussed it previously in http://digitalfoo.net/posts/2010/06/Progress_Bars_for_piped_Transfers_Using_pv/. You effectively use pv for the input datastream rather than specifying the if= argument to dd.

# pv /dev/sdX | dd of=/save/path/sdX.img conv=sync,noerror bs=64k
or
# pv /dev/sdX | dd conv=sync,noerror bs=64k | gzip -c > sdX.img.gz
Drawing Rectangles on the Screen with Xlib C Library

I needed to get familiar with the Xlib C library and (ugh!) libpng. I put this code together to basically learn how to capture mouse and keyboard events and act accordingly. I wrote a small program to allow the user to draw rectangle outlines on the root window (the application has no background...).

Lines are drawn by saving an origin that corresponds to where the user first clicked to start drawing the shape (ButtonPress) and calculating the width and height based on where the cursor gets dragged to (MotionNotify). Instead of specifying a specific line color, I simply XOR the pixels that make up the line segments of the rectangles being drawn. This makes the line segments easy to see when drawn on the screen.

Xlib does not really have a refresh or update method like most drawing toolkits. This can cause a situation where items get drawn to the screen but never disappear... even after the program exits. To remedy this when drawing new rectangles, redraw the previous rectangle when drawing a new rectangle. Doing this will re-XOR the screen back to the original pixels.

Read full post...

Convert a PDF to a Series of JPEG Images

I used this to convert PDF pages to JPEG images so I could display them in a slideshow in an application I was working on. It turned out to be pretty useful, though it can be a bit time consuming.

From the man page:
The convert program is a member of the ImageMagick suite of tools. Use it to convert between image formats as well as resize an image, blur, crop, despeckle, dither, draw on, flip, join, re-sample, and much more.

Read the man page for convert (after you install ImageMagick) to learn about a lot of extra arguments you can specify.

-density 150
horizontal and vertical density of the image

-quality 70
% of compression; If it's a photo, use 90-100.

-resize 500x
resize each full-size PDF page to width x height pixels. I didn't specify a height here, so all images just get resized to 500px wide.

Notice the last argument here: foo.jpg. This will be the basename for the images that get created. In this instance, foo-0.jpg, foo-1.jpg, foo-2.jpg, etc., will be generated.

$ convert -density 150 -quality 70 -resize 500x <filename.pdf> <foo.jpg>
Recreate Gitosis projects.list File

I accidentally deleted the projects.list file used by Gitosis. To recreate it, simply list the directories in the git repositories directory along with the owner's name. In my case it was super easy, because I own all the repositories.

$ cd /path/to/gitosis
$ touch projects.list
$ for DIR in `ls /path/to/git/repositories`; do echo "$DIR Digital+Foo" >> projects.list; done;

Replace Digital+Foo with the project owner's name. Use the plus (+) sign for a space.

Formatting Cell Data from Text to Numbers in OpenOffice Calc

One day, I was copying in data from a CS program of mine and couldn't figure out why the XY Scatter Plot was so messed up. It turned out that some of the columns I was graphing were being represented as strings and not numbers... even though I had pasted numbers into all the cells. For some reason this is still a pain to fix in OpenOffice, but fortunately there is an easy way.

This step is optional. Value Highlighting allows you to easily see which cells are being represented as numeric data and which cells are not.

  1. View → Value Highlighting

Now format the cells to hold a Number. This step alone does not actually convert the data; the cell still holds a string, which you can tell by the ' (single quote) prefixed to the value when you select it. Use a regular-expression Find & Replace to force the values to be re-entered as the numeric data type.

  1. highlight the cells that you want converted
  2. Format → Cells...
    1. Category → Number → OK
  3. Edit → Find & Replace...
    1. More Options
      1. check Current selection only
      2. check Regular Expressions
    2. Search for: .*
    3. Replace with: &
    4. Replace or Replace/All as necessary
Upgrading a FreeBSD System with csup

/etc/csup.conf

This is the file that tells csup what to do and how to do it. It probably does not exist on your system, so create it and edit in the following:

Notice the RELENG tag! RELENG_8 will sync the source tree to FreeBSD 8-STABLE. If you just want the 8.0-RELEASE tree, change it to RELENG_8_0.

  • RELEASE: code that has spent extended periods of time in STABLE and has been problem-free in the wild.
  • STABLE: code that has been polished, tested for stability, and is in the running to make it into RELEASE. Usually this codebase is quite stable and offers some goodies usually missing in RELEASE.
  • CURRENT: code that is often a test bed for new features or current works in progress. Obviously, stability is an issue here, but the FreeBSD team always needs testers to submit bug reports to fix up the codebase.

Update the following snippet to whatever RELENG_X you are aiming for.

# vi /etc/csup.conf
*default host=cvsup.freebsd.org
*default base=/var/db
*default prefix=/usr
*default release=cvs tag=RELENG_9
*delete use-rel-suffix
*default compress
src-all

Running csup

Now that we have a config for csup, we can sync our source tree with the tag specified in /etc/csup.conf by running csup with a few arguments.

# csup -g -L2 /etc/csup.conf
Parsing supfile "/etc/csup.conf"
*Connecting to cvsup.freebsd.org
Connected to 72.233.193.64
Server software version: SNAP_16_1h
Negotiating file attribute support
Exchanging collection information
Establishing multiplexed-mode data connection
Running
Updating collection src-all/cvs
... snip ... .... snip ...

Compile World and the Kernel

I would recommend doing an upgrade using the GENERIC kernel for safety reasons, but you can dive right in to using your new kernel if you please. Just replace GENERIC with YOURKERNELNAME that resides in /usr/src/sys/i386/conf.

If you plan to use GENERIC, you can actually leave out KERNCONF=GENERIC. I just put that in there so you can see what is going on and to make it easier to change if you want to use a custom kernel config.

# cd /usr/src
# make clean
# make buildworld
# make buildkernel KERNCONF=GENERIC
# make installkernel KERNCONF=GENERIC

The worst is over. Before we slip into our new system, we have to boot into single-user mode and finish up the install and merge the new installation files.

Reboot into single user Mode

To get into single-user mode, reboot and press 4 at the boot menu

FreeBSD Boot Menu

In single-user mode, you have to manually set up the FreeBSD boot process to get a usable file system. Accept /bin/sh as your shell and when you get to a prompt do the following to get the system mounted:

# mount -a -t ufs
# swapon -a
# ls /usr/src

Now that we can see /usr/src, we are ready to install the new installation files into their new home.

Running mergemaster may take some patience if you have not upgraded in a while, but pay attention to what you are doing. For example, you certainly do not want to install the new /etc/passwd or /etc/rc.conf!

Here is a short list of some of the config files that you will want to keep (by deleting the temporary). Keep an eye out for others and back up /etc!

  • /etc/passwd
  • /etc/group
  • /etc/hosts
  • /etc/rc.conf
  • /etc/master.passwd
  • /etc/shells
# cd /usr/src
# cp -R /etc /etc.bak
# mergemaster -p
# make installworld
# mergemaster -iFU
# reboot

Reboot and verify that the new kernel version has been updated.

$ uname -a
FreeBSD bakmon.ls.local 8.1-PRERELEASE FreeBSD 8.1-PRERELEASE #0: Sun Jul  4 00:39:14 UTC 2010     root@bakmon.ls.local:/usr/obj/usr/src/sys/GENERIC   amd64
Updating the BIOS on a Soekris Board

Updating the BIOS on a Soekris board is pretty trivial compared to the method needed for an ALIX board. This is a short guide on how to get a Soekris board's BIOS updated via XMODEM transfer.

Read full post...

Progress Bars for piped Transfers Using pv

I stumbled across pv the other day and found it useful. It is not so much a utility as eye candy, but useful nonetheless. Using pv is analogous to using cat, only with a progress bar and some extra goodies.

from the man page: pv allows a user to see the progress of data through a pipeline, by giving information such as time elapsed, percentage completed (with progress bar), current throughput rate, total data transferred, and ETA.

First, install pv with whatever package manager you use.

Example 1

# pv file.iso | dd of=/dev/cd0 bs=64k

Example 2

Server

# pv file.iso | nc -l 4444

Client

# nc host 4444 > file.iso
Get into BIOS of an ALIX Board

Simply press s immediately after boot time (while RAM is counting up).

Boot:  1 PC Engines ALIX.2 v0.99
640 KB Base Memory
261120 KB Extended Memory

01F0 Master 848A SanDisk SDCFB-1024    
Phys C/H/S 1986/16/63 Log C/H/S 993/32/63

BIOS setup:

*9* 9600 baud (2) 19200 baud (3) 38400 baud (5) 57600 baud (1) 115200 baud
*C* CHS mode (L) LBA mode (W) HDD wait (V) HDD slave (U) UDMA enable
(M) MFGPT workaround
(P) late PCI init
*R* Serial console enable
(E) PXE boot enable
(X) Xmodem upload
(Q) Quit

It took me a while to find that important bit of info so I figured I'd note it...

Creating a NanoBSD Access Point (AP) and Router

NanoBSD is an awesome set of scripts contained in the FreeBSD source tree that enables you to easily prepare and install a custom FreeBSD system for an embedded device. It is also highly optimized for Compact Flash media, providing a read-only file system and memory disks for the heavily written mount points, namely /etc and /var, to reduce wear on the flash media.

I have provided a number of config files for a NanoBSD system that provides various services, to get you started on an overlay of custom files geared toward making an AP (access point) out of an ALIX2C2 board from http://pcengines.ch (purchased at NetGate [US]). These files require minor tweaks if you are using another ALIX board or even completely different hardware (mainly just the network device names!).

Read full post...

Installing FreeBSD Ports on a NanoBSD Image via chroot
# mkdir /mnt/nanobsd
# mount /dev/da0s1a /mnt/nanobsd
# mkdir /mnt/nanobsd/usr/ports
# mount -t nullfs /usr/ports /mnt/nanobsd/usr/ports
# mount /dev/da0s3 /mnt/nanobsd/cfg
# chroot /mnt/nanobsd
# cd /usr/ports/foo/bar
# make install clean
# mkdir /cfg/local
# cp -R /usr/local/etc/* /cfg/local
# exit
# umount /mnt/nanobsd/usr/ports
# umount /mnt/nanobsd/cfg
# umount /mnt/nanobsd
Mounting (FreeBSD) UFS2 Partition in Linux

I always seem to forget this command. The following mounts a UFS2 FreeBSD filesystem to the /mnt/fbsd directory on a Linux box. Notice the read-only (ro) mount option; unfortunately, (as of this writing) Linux does not have write support for UFS2. Please let me know if this changes.

Change /dev/sda3 to your disk device!

# mkdir /mnt/fbsd
# mount -t ufs -o ro,ufstype=ufs2 /dev/sda3 /mnt/fbsd
Extract Contents of a RPM

You must install the rpm2cpio package on whatever operating system you are running. The following extracts an RPM's file hierarchy to the current directory.

$ mkdir ~/extracted_rpm
$ cd ~/extracted_rpm
$ rpm2cpio /path/to/FILENAME.rpm | cpio -div
Loading FreeBSD from grub2

Never edit /boot/grub/grub.cfg directly! You have to make changes in a special file under /etc/grub.d so that your changes will not get overwritten every time you update kernels, etc.

I am using my disk device name here. Make sure you use the one that fits your system.

  • hd0: the hard drive number
  • 3: the FreeBSD partition (indexed from 1)
  • a: the slice holding the /boot partition
# vi /etc/grub.d/40_custom
#!/bin/sh
exec tail -n +3 $0

menuentry "FreeBSD 8.0-RELEASE" {
    insmod ufs2
    set root=(hd0,3,a)
    chainloader +1
}

Run update-grub2 to merge the changes in /etc/grub.d/40_custom. You should also be able to verify that the new entry will be seen next time grub2 is loaded.

# update-grub2
# cat /boot/grub/grub.cfg | grep FreeBSD

Reboot and give it a try!

Remove RC Packages in apt-get Based Systems

First, see what stray packages are on the system so you know what is about to get deleted.

$ dpkg -l | grep ^rc | cut -d ' ' -f3 | less

What just happened there? We listed the packages that are installed with dpkg -l, filtered out results to only show lines starting with rc, and then trimmed the output to the third column which contains only the package names. Piping to less just allows us to easily scroll through the output in the terminal.
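
If you want to convince yourself the pipeline picks out the right column before feeding anything to dpkg -P, you can dry-run the text processing on a captured sample (these two lines are made up to mimic dpkg -l output):

```shell
# One installed package (ii) and one removed-but-configured package (rc);
# only the rc line should survive, and only its name column should print.
printf 'ii  vim  2:8.0  editor\nrc  nano 2.9    editor\n' \
  | grep ^rc | cut -d ' ' -f3
```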

Now that you have verified which packages are going to be deleted and taken care of any loose ends, it's time to remove the packages.

$ dpkg -l | grep ^rc | cut -d ' ' -f3 | xargs sudo dpkg -P
Connecting FreeBSD to a WPA Wireless Network

This uses the new VAP interface setup that comes with FreeBSD 8.0 and newer.

# vi /etc/rc.conf
--- snip --- snip ---
wlans_ath0="wlan0"
ifconfig_wlan0="WPA DHCP"
# vi /etc/wpa_supplicant.conf
network={
    ssid="ssid_goes_here"
    key_mgmt=WPA-PSK
    psk="password_here"
}
# /etc/rc.d/netif restart

Wait a few seconds for your wireless card to associate with the wireless device and see (1) if you are associated and (2) that you have an IP address.

# ifconfig wlan0

If you have multiple access points around and prefer one over another, add priority=<n> to that network's block in /etc/wpa_supplicant.conf. The higher the number, the higher the priority.
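
For example, a /etc/wpa_supplicant.conf with two networks might look like this (the SSIDs are placeholders); with these settings, wpa_supplicant associates with preferred_ap whenever both are in range:

```
network={
    ssid="preferred_ap"    # placeholder SSID
    key_mgmt=WPA-PSK
    psk="password_here"
    priority=10            # wpa_supplicant prefers the larger priority value
}

network={
    ssid="fallback_ap"     # placeholder SSID
    key_mgmt=WPA-PSK
    psk="password_here"
    priority=1
}
```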

Test IP connectivity to the public Internet.

# ping 4.2.2.1

Test DNS resolution against a public hostname.

# cat /etc/resolv.conf
-- list of nameservers from DHCP lease --
# ping www.google.com

If nothing is listed there, you can try using the 4.2.2.1 and 4.2.2.2 nameservers.

# vi /etc/resolv.conf
nameserver 4.2.2.1
nameserver 4.2.2.2
Basic Shell Commands Cheat Sheet (Linux, BSD, etc)

I compiled a list of useful command line utilities for classmates back in college and it seemed to be helpful.

Throughout this guide, I use <input description> to designate user input. Also, do not forget that you can use relative and absolute paths to files, though this guide usually assumes the files you act on are in the current working directory.

Read full post...

chmod Permission Index

I have put together a basic reference of chmod permissions. Enjoy!

For more helpful shell-related things, check out the basic shell commands cheat sheet.

Digit   RWX   Result
0       ---   no access
1       --x   execute
2       -w-   write
3       -wx   write & execute
4       r--   read
5       r-x   read & execute
6       rw-   read & write
7       rwx   read, write, & execute

R is read, W is write, X is execute.
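
You can verify any row of the table from a shell; the file name here is just for the demo:

```shell
touch demo.txt
chmod 754 demo.txt   # 7 = rwx (owner), 5 = r-x (group), 4 = r-- (other)
ls -l demo.txt       # the first column shows -rwxr-xr--
```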

http://live.digitalfoo.net/posts/basic-shell-commands-cheat-sheet-linux-bsd-etc

Installing irssi as a Normal (non-privileged) User

I had the need to set up irssi on my University shell account with minimal user access, so I took some notes on what I did. I have found out that a lot of people encounter the missing glib dependency, which is absolutely necessary for irssi to execute properly. Due to the problem's popularity, I included installing glib in this guide.

Read full post...

GIMP: Create PNG Images with Full Transparency Support in ALL Web Browsers

Transparency is a P.I.T.A. if you plan on exposing your image to older web browsers. The transparency in your image might appear to work as planned in the more recent browsers, but the same image can often result in an array of funky colors where the transparency is supposed to be in older browsers.

Using the GIMP, cross-browser transparency support is easily accomplished with a few clicks. The following instructions show what needs to be done to export the image correctly.

  1. Image -> Flatten Image
  2. Layer -> Transparency -> Add Alpha Channel
  3. Select -> By Color OR use the Fuzzy Select Tool with thresholds on the GIMP toolbox
  4. Edit -> Clear or DEL on the keyboard
  5. Image -> Mode -> Indexed...
  6. File -> Save As... -> filename.png
  7. (yes to defaults)

Read full post...

Simple iptables Config for a Linux Gateway

The following allows you to forward (NAT) traffic from an internal interface to an external interface (and back again ;]). In other words, creating a Gateway for a LAN (internal network).

Debian Based (apt-get)

# apt-get install iptables
# vi /etc/network/if-up.d/iptables

RedHat (rpm) Based

# yum install iptables
# vi /etc/sysconfig/iptables
#!/bin/sh

PATH=/usr/sbin:/sbin:/bin:/usr/bin

# user defined
WAN="eth0"
LAN="eth1"

# delete existing rules
iptables -F
iptables -t nat -F
iptables -t mangle -F
iptables -X

# always accept loopback traffic
iptables -A INPUT -i lo -j ACCEPT

# allow established connections, and those not coming from the outside
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -m state --state NEW ! -i $WAN -j ACCEPT
iptables -A FORWARD -i $LAN -o $WAN -m state --state ESTABLISHED,RELATED -j ACCEPT 

# allow outgoing connections from the LAN side
iptables -A FORWARD -i $LAN -o $WAN -j ACCEPT

# masquerade out the WAN interface
iptables -t nat -A POSTROUTING -o $WAN -j MASQUERADE

# do not forward from wan to lan 
iptables -A FORWARD -i $WAN -o $LAN -j REJECT

# enable forwarding packets from interface to interface
echo 1 > /proc/sys/net/ipv4/ip_forward

Debian Based (apt-get)

# chmod +x /etc/network/if-up.d/iptables
# sh /etc/network/if-up.d/iptables

RedHat (rpm) Based

# service iptables restart

Note that this config does not give the ability to provide DHCP or DNS services to LAN clients.

Disable Internal Speaker Beep on FreeBSD

FreeBSD ships with the internal speaker enabled, which can be very annoying when computing in public! I chose to disable the beep by disabling it at the kernel level with sysctl, instead of doing one-off hacks for each application that uses the system bell.

# sysctl hw.syscons.bell=0
hw.syscons.bell: 1 -> 0

Now check whether the beep is still there. If not, do the following to make the change permanent. If the sound is still there, skip to the next heading for some other methods of disabling the internal speaker.

# echo 'hw.syscons.bell=0' >> /etc/sysctl.conf

Read full post...

PHP Image Captcha Tutorial

Create a very basic captcha image from raw PHP.

I mainly created this to mess around with neural networks and cracking basic captcha images. I figured someone might want to build off this, but I'd personally choose something like reCAPTCHA for any use where you know spammers will be prevalent.

Read full post...

Basic HTTP Authentication with Apache 2

Apache's HTTP Authentication is a fast and easy way to lock down a directory so that it prompts users with a password dialog box to view the files.

This guide assumes that you have Apache2 already up and running.

Read full post...

Disable Linux Internal Speaker Beeps

Most any fresh Linux install, Debian in my specific case, automatically enables a multitude of wonderful high-pitched beeps and tones for your listening pleasure. You might have noticed them by hitting TAB on an invalid auto-complete, when you incorrectly log in to GDM, or any of the other seemingly infinite ways to get an ear-crunching BEEEEP.

To fix this you can go about disabling beeps in individual programs, but I have a better idea! Let's get the job done right and just blacklist the whole internal speaker to get rid of all beeps in all programs. Unless you are listening for motherboard beep codes, who really needs the internal speaker, anyway?

I use both modules as an example; the basic difference is that pcspkr is used in newer kernels. If one command does not work ('Module xxx does not exist' errors, etc), try the other.

Let's try to unload the (possibly) already running module.

# rmmod snd_pcsp
# rmmod pcspkr

Now we just need to make sure the module does not get loaded on system boot.

# echo 'blacklist snd_pcsp' >> /etc/modprobe.d/blacklist
# echo 'blacklist pcspkr' >> /etc/modprobe.d/blacklist
# reboot
Wi-Fi Enabled ALIX Board Hardware Buying Guide

I put together a quick list of parts that I often buy for home and office installs that need beefy WiFi.

I mostly use ALIX (formerly WRAP) embedded boards that pcengines makes. They are 500MHz with 256MB of RAM, and come in a variety of hardware options. A basic kit is less than $190 (US). The following provides a basic setup that will suit most smaller installs that need to use Wi-Fi (2.4GHz in this case). This setup uses somewhat beefy parts; you can take a peek at the other NetGate pages and downgrade some parts if you do not need a 9 dBi antenna or a sensitive wireless card. If you change wireless cards, make sure you have the right connector on the pigtail! Read the docs.

I am in the US and always use NetGate for all my orders. If you would like to see more official distributors or see a list for specific countries, visit the pcengines order page.

Name           Description                                         Price
alix2D2 kit    256MB RAM, 500MHz, 2 LAN, 2 MiniPCI, 512MB CF Card  $180
Ubiquiti SR2   802.11 B/G 400mW miniPCI Card                       $100
Antenna        9 dBi Rubber Duck Omni RP-SMA                       $18
Pigtail        U.FL to RP-SMA Jack Bulkhead Pigtail, 8 inch        $14

Check out the comparison chart for kit options.
Using a Remote FreeBSD ports Tree

There are some cases where a remote ports tree is a good thing to have around. It can save bandwidth by downloading distfiles only once and sharing them across all clients; it ensures that all hosts using the tree have the same versions of packages (good in a development environment or large network); and it helps when the client doesn't have enough space for the ports tree, distfiles, or the compile itself (NanoBSD!).

Read full post...

FreeBSD talking to an (IPSec) Sonicwall VPN
FreeBSD communicating with a SonicWall illustration

This is a basic setup involving a point-to-point IPSec VPN connection between a FreeBSD host and a Sonicwall TZ-170. This guide will probably work for most other versions of FreeBSD as well as other operating systems that use ipsec-tools and racoon.

For this tutorial, the FreeBSD source tree (/usr/src) should be installed. If you do not have it, download the tree from the FTP site and use the install.sh all script.

Read full post...