I am releasing a project I have been working on for a little while: a simple HTTP MJPEG streaming server called Hawkeye. The details can be found on its dedicated page.
How to read descriptions of software libraries
There is a certain language we use when describing software libraries. I find that there are often kernels of truth behind it that are not apparent on the surface. Here is my glossary:
Simple - the project is incomplete. It is likely a clone of a more feature-rich project that the author could not figure out.
Easy to use - will break the API in the next release
Featureful - bloated beyond belief
Stable - stale. It is likely that the project was abandoned or has a single maintainer that lost faith in it.
Powerful - has a complex API that only helps you in one specific use case. Prepare to write a wrapper around it
Advanced - requires intimate understanding of the algorithms used
Cross-platform - 90% cross platform. Get ready to write your own code to address the other 10%
Small - proof of concept not ready for prime time
Weekend project - a three week effort with glaring issues
Fast - is barely usable for running benchmarks
Lightning fast - hello world
Scalable - see "fast"
Smart - has lots of magic you won't understand
Complete solution - "do as we say, or you will suffer"
Engine agnostic - there is one correct engine and a dozen half-baked unsupported ones
Does one thing well - does one thing. There is probably a command line tool that does this better.
Fast growing - will break the API and the fundamental paradigm of the project with the next release
Well documented - documentation is either out of date or the project is abandoned. See also "stable"
Promising - no documentation
Like X but for Y - bad port of a popular solution to a domain where it is not applicable
Web scale - will lose your data when you turn away
Like MySQL only 100x faster - a memcached clone
NoSQL - a vast array of different data stores, from simple key-value in-memory solutions to complex distributed batch processing systems
Demonstrates the power of underlying technology X - first project with technology X
Industry standard - obsolete. There are likely better solutions.
Obviously this is somewhat tongue-in-cheek. There is a lot of great software out there and not all of it fits these stereotypes. However, next time you are about to describe a library as "a smart, fast, scalable X for Y", do the translation in your head first and have a self-aware chuckle.
TI MSP430 LaunchPad links roundup
Lately I've been playing with the TI MSP430 LaunchPad. This post is a collection of links to resources that were helpful to me for getting started.
- Texas Instruments [overview of the various development kits/evaluation boards][]. A number of official BoosterPacks are listed on this site as well.
- The MSP430 LaunchPad can be bought directly from [Texas Instruments for $9.99][]. The price went up from the original $4.30 on 2013-03-01.
- NJC's MSP430 LaunchPad Blog contains a number of useful and accessible tutorials.
- Four-Three-Oh! is a large community of hackers and tinkerers that is focused on the LaunchPad. Much of the development surrounding the LaunchPad happens here.
- Newark is a large supplier of both the LaunchPad boards and the microcontrollers themselves.
- Texas Instruments has a samples program where you can request a few additional microcontrollers at no charge.
If you happen to use Ubuntu and want to write and compile C directly, here's how you would do that:
Step 1: Install necessary packages via apt-get:
sudo apt-get install msp430-libc mspdebug msp430mcu binutils-msp430 gcc-msp430 gdb-msp430
Step 2: Write your code using your favorite editor/environment.
Step 3: Compile the code using the msp430-gcc executable:
$ msp430-gcc -o main.elf *.c
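Note that msp430-gcc builds for a default MCU unless told otherwise, so you will likely want to pass the -mmcu flag matching the chip on your LaunchPad. The G2553 below is an assumption; substitute your part:
$ msp430-gcc -mmcu=msp430g2553 -o main.elf *.c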
Step 4: Flash the code onto the microcontroller:
mspdebug rf2500 'prog main.elf'
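Alternatively, mspdebug can be used interactively: run mspdebug rf2500 by itself, then type prog main.elf (and run to start execution) at its prompt.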
You need IPv6 now and here's how to get it
The Internet is dead! Long live the Internet (v6)! You need IPv6. I need you to have IPv6 so you can view this website over the next generation Internet Protocol. If you and I both had IPv6, we would be able to forget about such inconveniences as NAT. We could video chat without having to have a separate server. We could share files directly. We could do a whole lot of really cool stuff.
Now, the question is: how do you get IPv6? Here is one way using SixXS and a Raspberry Pi.
This is my current method, since it is low cost and requires no special router setup. Basically, IPv6 packets are encapsulated into IPv4+UDP via the Anything-in-Anything protocol. UDP traverses NAT boundaries fairly easily and SixXS provides a very nice service so that you don't have to manually tell them that your public IP has changed. Using this setup, I've basically created a generic IPv6 tunnel endpoint and router that I can connect to almost any LAN and it would automagically enable IPv6 on that network. Let me show you how:
Step 1. Obtain a Raspberry Pi and install Linux on it. This is beyond the scope of this post, and documented well elsewhere. You can also use any other always-on device on your network, but I will assume you will get a Raspberry Pi for the purpose here.
Step 2. Get an account with SixXS. This is a multi-step process where some steps require manual approval, but it goes pretty quickly. Once you have your account, request a tunnel and a subnet. For the reason field, state something like "I want to get my local network IPv6 enabled", but with more detail. Make sure to select the AYIYA type of tunnel.
Step 3. Set your Raspberry Pi as a router:
echo "net.ipv6.conf.all.forwarding=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
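You can verify that forwarding took effect:
$ sysctl net.ipv6.conf.all.forwarding
net.ipv6.conf.all.forwarding = 1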
Step 4. Set up your firewall:
ip6tables -A INPUT -i lo -j ACCEPT
ip6tables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
ip6tables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
ip6tables -A INPUT -s 2001:4830:xxxx:xxx::/64 -j ACCEPT
ip6tables -A INPUT -s 2001:4830:xxxx:Yxxx::/64 -j ACCEPT
ip6tables -A INPUT -p ipv6-icmp -j ACCEPT
ip6tables -A INPUT -j DROP
ip6tables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
ip6tables -A FORWARD -p tcp -m tcp --dport 22 -j ACCEPT
ip6tables -A FORWARD -s 2001:4830:xxxx:Yxxx::/64 -j ACCEPT
ip6tables -A FORWARD -p ipv6-icmp -j ACCEPT
ip6tables -A FORWARD -j DROP
Note that we are letting two IPv6 subnets through: 2001:4830:xxxx:xxx::/64 and 2001:4830:xxxx:Yxxx::/64. The one with the Yxxx is going to be the routed subnet. That's the one that the rest of the devices on your network will use. The one with just the xxx will only have two addresses on it: ::1 (the remote end of your tunnel) and ::2 (your Raspberry Pi).
Step 5. Make sure your firewall is enabled at boot time. This is easy: put the following into /etc/network/if-pre-up.d/ip6tables-load and make it executable ($ sudo chmod 755 /etc/network/if-pre-up.d/ip6tables-load):
#!/bin/sh
ip6tables-restore < /etc/ip6tables.rules
exit 0
Now, put the following into /etc/network/if-post-down.d/ip6tables-save and make it executable ($ sudo chmod 755 /etc/network/if-post-down.d/ip6tables-save):
#!/bin/sh
ip6tables-save -c > /etc/ip6tables.rules
if [ -f /etc/ip6tables.downrules ]; then
    ip6tables-restore < /etc/ip6tables.downrules
fi
exit 0
For good measure, execute the save script once now so your current rules are persisted:
$ sudo /etc/network/if-post-down.d/ip6tables-save
Step 6. Now that you are firewalled off, let's bring up the IPv6 tunnel. All this takes is:
sudo apt-get install aiccu
Answer the questions about your login and password, then let the install finish. Check that you have IPv6 connectivity:
ifconfig
...
sit0 Link encap:IPv6-in-IPv4
inet6 addr: ::127.0.0.1/96 Scope:Unknown
inet6 addr: ::192.168.1.225/96 Scope:Compat
UP RUNNING NOARP MTU:1480 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
sixxs Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet6 addr: fe80::4830:xxxx:xxx:2/64 Scope:Link
inet6 addr: 2001:4830:xxxx:xxx::2/64 Scope:Global
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1280 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0 MiB) TX bytes:0 (0 MiB)
...
$ ping6 google.com
PING google.com(lga15s29-in-x01.1e100.net) 56 data bytes
64 bytes from lga15s29-in-x01.1e100.net: icmp_seq=1 ttl=53 time=44.2 ms
64 bytes from lga15s29-in-x01.1e100.net: icmp_seq=2 ttl=53 time=47.1 ms
^C
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 44.231/45.715/47.199/1.484 ms
Step 7. Start using your IPv6 routed subnet. First, you will want to edit your /etc/aiccu.conf file. Here is the diff:
-#setupscript /usr/local/etc/aiccu-subnets.sh
+setupscript /usr/local/etc/aiccu-subnets.sh
Now, create an executable script at /usr/local/etc/aiccu-subnets.sh with the following content:
#!/bin/sh
ip addr add 2001:4830:xxxx:Yxxx::1/64 dev eth0
Then restart aiccu: $ sudo /etc/init.d/aiccu restart. Now your eth0 will have its own IPv6 address in the routed (Yxxx) subnet.
Step 8. Enable IPv6 for the rest of your LAN. This step is also very easy. We will install radvd ($ sudo apt-get install radvd) and configure it to advertise your routed network prefix. Create a file at /etc/radvd.conf with the following content:
interface eth0 {
    AdvSendAdvert on;
    MinRtrAdvInterval 3;
    MaxRtrAdvInterval 10;
    prefix 2001:4830:xxxx:Yxxx::/64 {
        AdvOnLink on;
        AdvAutonomous on;
        AdvRouterAddr on;
        AdvValidLifetime 30;
        AdvPreferredLifetime 20;
    };
};
Now restart radvd: $ sudo /etc/init.d/radvd restart. The rest of your LAN is now IPv6 enabled.
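To double-check from another device on the LAN, look for an autoconfigured address with the Yxxx prefix and ping an IPv6 host (the interface name may differ on your machine):
$ ip -6 addr show dev eth0
$ ping6 ipv6.google.com
Enjoy.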
2012-12-04 WOD: Kettlebell bonanza
Going forward, I'm going to post a few workouts that I am doing that day, mostly so I don't forget them. Feel free to copy, comment, etc.
Warmup: 250 meters on the rower or 1/4 mile run
- 12 swings with 30#
- 12 bear squats
- 12 one-hand swing and switch 30#
- 12 lunges with KB hand switch 30#
- 12 getups
- 12 hand-switch box push-ups
- 12 box jumps
- 12 sumo squat high pulls 30#
Rest every 4 rounds, as many rounds as possible in 30 minutes.
How browsers should connect to web servers
The problem
Always on a quest to make the Web faster (see Ping Brigade), I've
been thinking about
ways to speed up fetching web pages. My investigation started with the
fact that many popular web servers are backed by multiple
IP addresses. For example:
$ host google.com
google.com has address 74.125.137.113
google.com has address 74.125.137.102
google.com has address 74.125.137.101
google.com has address 74.125.137.100
google.com has address 74.125.137.139
google.com has address 74.125.137.138
google.com has IPv6 address 2607:f8b0:4002:c01::8a
That's six IPv4 addresses and one IPv6 address. Looking at this list, I don't know
which one is closer/faster/less loaded. How does, say,
a browser pick one of these? Well, typically one of these will be
picked at random and a connection will be attempted. The browsers
are smart enough to know whether the given machine has access to IPv6,
IPv4, or both and usually will prefer IPv6 if it is available.
After one address is picked, it is cached by the browser as the address
for the given hostname. The connection is attempted with a
fairly large timeout (30-45 seconds). If the connection succeeds, the
browser proceeds as usual to fetch the web page over HTTP. If it fails,
the browser uses the next address in the list, making another attempt.
Notice how terrible this is for the user experience. If one of Google's
seven servers is having an issue, then the user has to wait for
30-45 seconds before the next attempt is made. This is bizarre,
especially since Google has no way to control the timeout of the
browser
(well, maybe Google does since they ship Chrome, but any other server
operator does not). This is definitely a setting you would want to
control:
saying that your web servers are expected to respond within, say, 5
seconds would give you control over the browser behavior. We do, in
fact, have something similar server-side when we set up reverse
proxies: you can
define how long to wait for a response from each upstream server before
moving onto the next one and marking the timed-out one as failed.
However, we have no such system to control browser behavior. I imagine
if we
were to add it, we could leverage DNS to supply such information.
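For instance, with nginx as the reverse proxy (one concrete example; the addresses below are hypothetical), the policy might look like:
upstream backend {
    server 10.0.0.1 max_fails=1 fail_timeout=30s;
    server 10.0.0.2 max_fails=1 fail_timeout=30s;
}
server {
    location / {
        proxy_pass http://backend;
        proxy_connect_timeout 5s;
    }
}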
I digress. The good news is that browser vendors can fix this with no
need for new infrastructure! All they'd have to do is change the
algorithm
they use to select the server they connect to. After playing around
with different ways of initiating TCP connections, I created a prototype
of a Python framework that attempts to make multiple connections and
selects the first one that succeeds. Some details:
The Hypothesis
Let's start with the idea that there are multiple A/AAAA records for the
hostname we are trying to connect to. This is certainly
the case with some very popular websites (Google, Facebook, etc.). My
hypothesis is that if we make multiple simultaneous connection requests
to each IP address, the first server to successfully complete the
three-way TCP handshake will fit one or more of the following criteria:
- The server is physically closer to the client than others.
- The server is closer to the client than others in terms of network locality (e.g.: if you are in San Francisco and the server is in LA, but your packets are routed through NYC, you are going to have a bad time.)
- The server has a faster, lower-latency internet connection than others.
- The server is less loaded.
- The server is up, while others are down.
The last condition is especially important: we don't want our users
waiting for 30-45 seconds just to realize that one of the servers is
down.
I expect that the other conditions will yield a marginal decrease in the
time it takes to fetch a resource over HTTP as well. With this
hypothesis,
an obvious solution becomes apparent: let's make simultaneous
connection attempts!
The Experiment
connie-experiment is the result of my work on this topic. It is a
Python framework
for making and measuring simultaneous TCP connections. It provides
several useful interfaces on top of Python's socket API (which is really
just
a thin wrapper around the BSD socket API). First, it provides the
connie_connect() function, which returns a connected socket to a
host/port combination as requested by the caller. Under the hood,
connie_connect() does the following:
- Calls getaddrinfo() to get a list of IPs associated with a given host/port.
- Caches the association between host/port and the list of IPs (currently for 60 seconds).
- Creates one non-blocking socket per IP address and calls connect() on each.
- Uses Linux's epoll() to look for the first connected socket.
- If after the specified connection timeout no connection is established, it raises an exception.
- If one of the sockets successfully connects, close() is called on the rest of the sockets. The connected socket is returned.
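To make that concrete, here is a minimal sketch of the same algorithm (my own illustration, not the actual code from the repo; parallel_connect is a made-up name, and the DNS caching step is omitted):

import select
import socket
import time

def parallel_connect(host, port, timeout=5.0):
    # Resolve every A/AAAA record and start a non-blocking connect on each.
    pending = {}
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        sock.setblocking(False)
        sock.connect_ex(sockaddr)  # EINPROGRESS is expected here
        pending[sock.fileno()] = sock

    epoll = select.epoll()
    for fd in pending:
        epoll.register(fd, select.EPOLLOUT)

    winner = None
    deadline = time.time() + timeout
    try:
        while winner is None and pending and time.time() < deadline:
            # A socket becoming writable means its handshake finished;
            # SO_ERROR tells us whether it succeeded.
            for fd, _event in epoll.poll(max(deadline - time.time(), 0)):
                sock = pending.pop(fd)
                epoll.unregister(fd)
                if sock.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) == 0:
                    winner = sock
                    break
                sock.close()  # this address failed; keep waiting on the rest
    finally:
        epoll.close()
        for sock in pending.values():  # close the losers
            sock.close()

    if winner is None:
        raise socket.error("no address of %s:%s connected in time" % (host, port))
    winner.setblocking(True)
    return winner

As with the prototype itself, this sketch is Linux-only because it uses epoll.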
Additionally, the codebase includes a similar implementation of a TCP
connection function called cached_dns_connect(), where the
host/port => IP addresses mapping is also cached for 60 seconds, but
only a single IP address is picked and a connection is attempted. This
function is provided to help create benchmarks against connie_connect()
and the regular socket.connect().
Built on top of connie_connect() and cached_dns_connect() are two
subclasses of httplib.HTTPConnection, which establish their connections
using the two newly implemented connection functions respectively.
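Subclassing works because httplib only needs self.sock to be set once connect() has run. A minimal sketch, again using the made-up parallel_connect() from above (in Python 3 the module is http.client):

import httplib  # Python 2; use http.client in Python 3

class ParallelHTTPConnection(httplib.HTTPConnection):
    # request(), getresponse(), etc. are inherited unchanged;
    # only the way the underlying socket is created differs.
    def connect(self):
        self.sock = parallel_connect(self.host, self.port, timeout=5.0)

conn = ParallelHTTPConnection("www.google.com", 80)
conn.request("GET", "/")
print(conn.getresponse().status)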
Lastly, there are two benchmark files: one to test a TCP
connect/disconnect sequence, and another to test fetching web resources
over HTTP
after establishing connections using the different strategies provided.
The Results
I encourage everyone to try these out for yourselves. I was able to try
these on a few different setups and the results are below. As you will
see, connie_connect() wins almost every time, sometimes by a significant
margin. In the cases where one of the servers is not reachable
(think an IPv6 address from a client with no IPv6 connectivity), that
server is simply ignored by connie_connect(). Let's take a look:
The test was run 300 times per function, choosing to connect to one of:
google.com, apple.com, yandex.ru, facebook.com, maps.google.com, google.cn
Using a public Wi-Fi network. IPv4 only.
$ python conbench.py
connie_connect: count 300, total 18.6567, per call: 0.0622, std dev: 0.003
cached_dns_connect: count 300, total 25.5700, per call: 0.0852, std dev: 0.003
plain_connect: count 300, total 32.9295, per call: 0.1098, std dev: 0.016
$ python httpbench.py
connie_fetch: count 300, total 61.3235, per call: 0.2044, std dev: 0.009
cached_dns_fetch: count 300, total 62.3081, per call: 0.2077, std dev: 0.005
plain_fetch: count 300, total 77.4779, per call: 0.2583, std dev: 0.009
Time Warner residential connection. Native IPv4 + tunneled IPv6 from he.net.
$ python conbench.py
connie_connect: count 300, total 21.7207, per call: 0.0724, std dev: 0.0032
cached_dns_connect: count 300, total 28.0278, per call: 0.0934, std dev: 0.0038
plain_connect: count 300, total 39.2406, per call: 0.1308, std dev: 0.0036
$ python httpbench.py
connie_fetch: count 300, total 62.5270, per call: 0.2084, std dev: 0.0107
cached_dns_fetch: count 300, total 63.9759, per call: 0.2133, std dev: 0.0063
plain_fetch: count 300, total 98.9979, per call: 0.3300, std dev: 0.0100
From a 100Mbit/s dedicated server. IPv4 only.
$ python conbench.py
connie_connect: count 300, total 12.3907, per call: 0.0413, std dev: 0.004
cached_dns_connect: count 300, total 12.2159, per call: 0.0407, std dev: 0.004
plain_connect: count 300, total 13.4622, per call: 0.0449, std dev: 0.003
$ python httpbench.py
connie_fetch: count 300, total 32.6758, per call: 0.1089, std dev: 0.0060
cached_dns_fetch: count 300, total 36.5850, per call: 0.1219, std dev: 0.0064
plain_fetch: count 300, total 35.2439, per call: 0.1175, std dev: 0.0058
From a 1000Mbit/s dedicated server. Native IPv4+IPv6.
$ python conbench.py
connie_connect: count 300, total 9.6421, per call: 0.0321, std dev: 0.0028
cached_dns_connect: count 300, total 14.9740, per call: 0.0499, std dev: 0.0039
plain_connect: count 300, total 14.0381, per call: 0.0468, std dev: 0.0028
$ python httpbench.py
connie_fetch: count 300, total 29.4031, per call: 0.0980, std dev: 0.0059
cached_dns_fetch: count 300, total 33.9132, per call: 0.1130, std dev: 0.0053
plain_fetch: count 300, total 32.2257, per call: 0.1074, std dev: 0.0056
As you can see, on residential connections connie_connect() makes a
significant difference, so the hypothesis holds up. When run
in server environments with solid internet connections, it still makes
a significant difference when fetching web resources, but the time to
open/close a connection is very comparable to the canonical connection
strategy. Lastly, connie_connect() works as promised when
one of the servers is down.
Pros and Cons
There are extra benefits to using an abstraction like the one provided
by connie_connect(). For one, IPv4 vs IPv6 is abstracted
away. Instead of looking at whether the system has a global IPv6
address and worrying whether the network is configured properly, you
can rest assured that the connected socket returned to you works,
regardless of the underlying IP protocol. This way the transition to
IPv6 becomes a little more seamless. Secondly, DNS entries are cached
in memory for a small amount of time, preventing you from having to do
extra DNS lookups. Third, this connection strategy works for any code,
not just browser to webserver. For example, it can be used when
implementing client-side code for connecting to a REST or SOAP API, or
establishing any TCP connection.
There are some major negatives to using this project in production.
First, creating extra sockets and attempting to connect them will
result in more open files for your kernel to keep track of. Depending
on the workload, you may quickly run out of file descriptors, which is
a bad thing. One way to remedy this would be to add a limit on how many
simultaneous connections will be attempted (my gut feeling is that 2-3
would yield good results).
Second, with every call to socket(), connect(), epoll(), and close()
you are introducing
a syscall, which will result in a context switch from your userspace
code to the kernel code. On a mostly idling workstation running a
browser,
this overhead will be minor compared to the latency of establishing a
TCP connection, but on a busy server with many processes all competing
for
system resources it can become more important. Additionally, memory
usage will be higher since more RAM is needed to keep track of TCP
connections
and polling objects.
Thirdly, the current implementation works on Linux only. This is due to
my choice to use epoll() instead of the more portable, but more
limited, select(). This is an implementation shortcoming
which can be addressed by implementing the system-specific
polling mechanisms for the popular OSes of today.
Lastly, this code is experimental. It was written to run benchmarks, and
has not been vetted in long-running processes, etc. It is also not
thread-safe (the cache class has no locking). I do not believe it would
require much to get it polished enough to run on mission critical
systems, but that was not the goal of the project, so I am holding off
saying anything else until the project is properly reviewed by people
who, unlike me, know what they are talking about.
You can get the source for connie-experiment from the GitHub
repo. Please feel
free to comment and contribute. I would love to hear what your thoughts
are. I would especially love to hear from the Google Chrome and Firefox
teams regarding the feasibility of getting this strategy implemented in
browsers.
What to Bring to a Mud Run, Part 2
My last post about mud runs generated quite a few pageviews, so I figured I'd post a quick update. Hopefully, this info will be helpful to people doing mud runs in the future. I recently completed my second mud run (Zombie Escape at Panic Point), which was a great experience. Here is what I learned there.
First, in addition to all the other things you should bring (towels, change of clothes, water, cash and a distinctive bag), you should also bring a snack. My preference is trail mix and fresh fruit. You'll likely want to have something like a banana right before you run to make sure you have enough blood sugar to run fast.
Also, make sure that you bring all of these items with you to the race location. Leaving a change of clothes in the car is no good if you want to change after you shower. Panic Point had much better showers than the previous mud run I did, so this time I actually washed off. They also provided two tents, one for men, one for women, to change out of the muddy clothes. The bag check worked very well for me this time around so I felt no reason to keep all the things I brought in the parking lot.
Another interesting thing I saw a lot of runners do was use head-mounted action cams. One guy actually had two of them, plus a tripod. I saw many videos (for example this one) on YouTube afterwards that look like they came from these cameras. I doubt it'll get you to Sundance, but it'd at least be fun to share with family/friends. There was also a very cool, but short, official video.
Lastly, this time around my team made our own T-shirts. We used white cotton-polyester blend shirts and iron-on transfers printed on a high quality inkjet. Surprisingly the transfers held up really well, though the shirts are no longer white. The best part: the price. We paid about $5/shirt, as compared to others at the event who ordered them online for $20-$30.
Introducing LetSlide
I recently had the pleasure of using landslide to create a couple of quick presentations. The premise here is great: write what you want to say in Markdown, get elegant slides back. However, while I am familiar with how to set up Python, install things from GitHub, etc., many are not. To give the power of landslide to the people (and to use it for myself), I created (drumroll please), LetSlide. Here, you have a simple web interface to create landslide presentations, and a place to host the results. Give it a whirl and let me know what you think.
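If you have not written landslide source before, it is plain Markdown; if memory serves, slides are separated by horizontal rules (check the landslide docs to be sure):

# LetSlide

Write your slides in Markdown.

---

# Slide two

- bullet points
- code samples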
What To Bring and How to Train for a Mud Run
On April 21st I did a mud run (Rugged Maniac) with a few of my friends. There were a few things I learned about it that I am going to pass on, or at least write them down so I don't forget them:
First of all, there are certain things you should bring. I did not bring all of these, and should have. Here is my list:
- Towels and rags. You will want these in your car on the way back. You could also use a large towel to wrap around yourself to change.
- A change of clothes. There were showers there, but people looked dirtier coming out than in. The mud is very sticky and a hose outside will not wash it off easily.
- Sports drinks/water. Don't buy it there.
- Cash. Almost nobody accepts credit cards there.
- A distinctive-looking bag for all the stuff. Rugged Maniac had a bag check, but it took them forever to find a bag if it was not distinctive enough.
The question I asked very often of others who have done this run was "What should I wear?". The answer is not simple, since everyone has their own preferences. However, here are some nuggets of wisdom I managed to glean:
- Do not wear anything white or light. It will be ruined by picking up the color of mud. That is, unless you specifically intend to throw out whatever you are bringing.
- Do not wear cotton. Spandex, polyester and rayon seem to do better at repelling mud. I ended up wearing a Nike short sleeve shirt with long spandex pants.
- Wear shoes you are willing to throw out. I put mine through the washer twice and they are still very much mud color and in some cases full of mud.
- Consider Five Fingers style shoes. I do not own a pair, but one of my running mates does and used them. They seem to have held up well enough and he did not injure himself. If you already own a pair, consider using them.
- You will likely end up throwing out your socks and underwear.
"How do I train?" and "What obstacles will there be?" were other two good questions. In general the obstacles included crawling through the mud, jumping over things and climbing walls. As far as I know, there isn't much to do to train for crawling in the mud, but you can train for the jumps by practicing your broad jump. You should also work on functional fitness, as you will be tested when climbing walls. Thankfull, they were not too tall, and generally your teammates would be able to help you if you cannot make it over one. Going into it, I thought you'd have to be doing a lot of pulling your body up with your arms only, but that turned out not to be the case with this race. However, it seems that other races do have more strenuous obstacles that might require more upper body strength.
There was also lots of running on uneven terrain. You will want to make sure you have good balance and that you have strong ankles before race day. Rolling your ankle in the middle of the race, in the middle of the woods would really suck.
Selecting a mud run also seems to be an important consideration. If this is your first time, consider doing something on the simpler side, such as the Rugged Maniac. It was a 5k race, so most people end up finishing it in about an hour. There are other, much longer races out there that will take more out of you.
Now go out there and get muddy!
Two project sunset
I am sunsetting two of my pet projects due to lack of use. The first is LovelyCo.de. I never had time to market it properly so it got zero use since the launch.
The second is DiscID DB. I was the only user of this project, and while setting up and maintaining a public API was a great experience and I learned quite a bit, I decided not to continue working on it.
If you have a particular interest in either of these two projects, please contact me via email (see the About page).