Using "debootstrap" to build a custom Debian-based system

Linutop is a small device that can run Linux from a USB drive and can be used to browse the internet or run office tools.

Found a useful article on the Linutop wiki about using debootstrap to create a custom Linux system based on Debian. Click here.

More information about Linux Live CDs based on Debian can be found on Knoppix.

Linux Innovations of 2007

Phoronix talks about the great Linux innovations of 2007.

To summarize, the following were some remarkable events:

1. Virtualization support in the kernel, along with VMware's VMI, Xen, and the QEMU accelerator going open source

2. Desktop bling - Beryl merging back with Compiz, creating Compiz Fusion

3. ATI introducing a brand new driver codebase along with support for the Radeon HD 2900XT and AIGLX, and ATI/AMD releasing their GPU documentation.

4. SplashTop

Connecting a Bluetooth phone with Linux

Ars Technica has a useful article on getting your Bluetooth-enabled mobile phone to talk to Linux, along with some more useful tips and tricks.

Taking backups on Linux

Linux Planet has a very useful series on taking backups in Linux by Carla Schroder, who authored the Linux Cookbook.

Part 1
Part 2
Part 3

Useful tool to analyze network traffic on Linux

Darkstat is a simple packet sniffer that runs in the background and gives you basic information about the traffic on your network.

On Ubuntu, install it using

sudo apt-get install darkstat

After installation, run:

sudo darkstat -i eth0

The collected data is displayed in the browser (http://localhost:666/).

"sudo rm -rf /" on Ubuntu is dangerous

To avoid accidentally killing your Ubuntu system, add the following to /etc/bash.bashrc and /etc/profile:

alias rm="rm --preserve-root"

Google Android SDK launched

Google has launched the SDK for Android. More information here.

Android in action


Architecture

Google Android

Google buys Android, BusinessWeek reports.

In the meanwhile, Nokia acknowledges the new Google platform, while the CEO of Symbian (of which Nokia owns a 47% stake) disses Android! Sun backs the Google phone OS idea.

To find the version of Ubuntu installed

$ cat /etc/lsb-release
OR
$ cat /etc/issue

To get the version of kernel and other information do,
$ uname -a

Update:

Just use

$ lsb_release -a
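The commands above can be combined into a small fallback script, a sketch only, since which of these files and tools exist varies by distribution:

```shell
# Try each source in order; fall back to the kernel version, which is
# always available via uname.
if command -v lsb_release >/dev/null 2>&1; then
    lsb_release -a 2>/dev/null
elif [ -r /etc/lsb-release ]; then
    cat /etc/lsb-release
elif [ -r /etc/issue ]; then
    cat /etc/issue
fi
uname -r    # kernel release string
```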

Novell Bangalores 250 US Jobs

Ouch!

Mozilla Prism

Mozilla Labs has launched Prism. The idea is to turn web applications like Gmail, Google Calendar and Facebook into desktop applications.

Prism lets you create desktop shortcuts for the websites you want to access with a single click, and then opens each website in its own window frame, giving the impression that the application is being accessed from your desktop. Needless to say, the window frame is actually Firefox sans the toolbars!

Here is how Gmail looks in Prism!


Source: Wired News

Open Source games for Linux

Linux Journal is running an article on free open source games available for Linux.

I have installed the following games that are available in the Ubuntu repositories - Torcs, Trigger, and Open Arena (based on Quake 3). Unfortunately my graphics card does not render them very well. Time for a new machine, I guess!

$ sudo apt-get install trigger torcs openarena

LinuX-gamers.Net has an excellent repository of information related to games for Linux.

Audio file rippers and encoders on Linux

There is a nice article on PolishLinux that explains the process of ripping and encoding and also lists the tools that can be used on Linux to convert CDs to your desired encoding.

The article talks about the following:

Encoders
1. Lame for MP3
2. OGG-Vorbis - oggenc

Rippers
1. cdstatus
2. cdparanoia
3. abcde
4. ripperX
5. sound-juicer, Grip
6. k3b

Performing an operation on multiple files using a single command line in bash

This is quite useful if you need to move files from multiple locations to a single place.

Let's assume that you have been searching for all *.jpg files on your system and you need to move all of them to a specific folder, /home/deadpan/images.

To get the files, do:

$ find . -name "*.jpg"

This would get you all the .jpg files.

To move all these files from various location to a single folder do,

$ for file in `find . -name "*.jpg"`; do mv $file /home/deadpan/images; done

By using regular expressions in find, as discussed here, you can get very specific in your search.
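One caveat with the backtick loop above: it breaks on file names that contain spaces, because the shell splits the output of find on whitespace. A more robust variant (a sketch, with made-up paths) uses find's -exec instead:

```shell
# Scratch tree to demonstrate (all paths here are made up for the example).
mkdir -p /tmp/demo/src/sub /tmp/demo/images
touch "/tmp/demo/src/a.jpg" "/tmp/demo/src/sub/b c.jpg"   # note the space

# -exec passes each matched path to mv as a single argument, so file
# names containing spaces survive; {} is the file, \; ends the command.
find /tmp/demo/src -name "*.jpg" -exec mv {} /tmp/demo/images \;

ls /tmp/demo/images
```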

Linux networking commands

Some useful networking commands (source: whatismyip.com). Note that you would have to run these as root on RedHat/Fedora or using sudo on Ubuntu/Debian.

Display current config for all NICs: ifconfig

Display current config for eth0: ifconfig eth0

Assign IP: ifconfig eth0 192.168.1.2

Assign IP/subnet: ifconfig eth0 192.168.1.2 netmask 255.255.255.0

Assign default gateway: route add default gw 192.168.1.1

Assign an additional IP (alias): ifconfig eth0:0 192.168.1.2

Assign a second additional IP: ifconfig eth0:1 192.168.1.3

Disable network card: ifconfig eth0 down

Enable network card: ifconfig eth0 up

View current routing table: route (or route -n)

View ARP cache: arp (or arp -n)

Ping: ping -c 3 192.168.1.1

Trace route: traceroute www.whatismyip.com

Trace path: tracepath www.whatismyip.com

DNS test: host www.whatismyip.com

Advanced DNS test: dig www.whatismyip.com

Reverse lookup: host 66.11.119.69

Advanced reverse lookup: dig -x 66.11.119.69

Ubuntu hard disk load/unload bug

Read this. Quite scary!
I run feisty (beta) on a Dell Inspiron 9400 with a Hitachi HTS541616J9SA00 hard drive. After booting, the drive's power management settings are such that it spins down A LOT. To give you some statistics: the drive is rated for 600,000 load/unload cycles, and after 2.5 months of running Feisty I'm already at more than 56,000 load/unload cycles (and only 150 power cycles), according to the SMART data. At this rate the drive will be dead after 2.5 years, and I don't even use this computer for more than a couple of hours each day.”

It appears that there is a bug in Feisty and Gutsy which results in a lot of hard disk load/unload operations, drastically reducing the life of the disk. This usually happens if "Laptop Mode" is enabled along with Advanced Power Management (APM).

Some developers have suggested a fix that disables APM for the drive, which can be done using:

$ sudo hdparm -B 255 /dev/hda (or /dev/sda if you have a Serial ATA drive)

A value of 255 disables APM entirely, which means you might end up with shorter battery life. If you want, you can set a less drastic value (some people have used 254; the value 1 is supposedly what causes the problem).

Installing smartmontools is a good idea, as suggested by this Linux Journal article.

$ sudo apt-get install smartmontools

Update:
More talk and suggestions here. The problem was slashdot(ed) yesterday!!

One Laptop per child

There is an interesting piece of info in the article 'One Laptop Per Child' Hit With Production Glitch, Shortages. The cost per laptop, which was supposed to be $100, has risen to around $188 per piece. The interesting thing to note is that the cost of standard PCs has declined to such an extent that one can get a Linux-based Everex laptop for just around $298 from Wal-Mart.

Having said that, I still think the OLPC project is great, owing to the fact that the laptops have been designed keeping children in mind - they are durable and run on just 2 watts of power. More info on OLPC here.

GIMP 2.4 released with a new look

The GIMP team has released the latest version of the software, version 2.4. Unlike other releases, this one features a lot of new additions - a refreshed look, scalable brushes, new selection tools, a new align tool, reworked menus, and a lot more.

Needless to say, people have already started talking about it on various forums; there is a flame war going on at Slashdot and a good discussion at OSNews. Penguin Pete gives his review here.

Ubuntu naming convention

A colleague of mine was curious about the unconventional naming style of Ubuntu releases, and I told him it was based on the characters of the movie Toy Story. After doing a little research I realized that I was wrong.

The convention is based on the following logic, "adjective + animal name" - hence Gutsy Gibbon, Dapper Drake, Breezy Badger and so on.

Ubuntu releases Gutsy Gibbon (7.10)

Gutsy Gibbon was released a couple of days back, and so far I have seen positive reviews - Wired, HowtoForge.

There are some people who believe that PCLinuxOS is a better distro than Ubuntu. I have been using Kubuntu for a long time now, and I always end up with the following problems on my old Toshiba laptop:
1. The display always gets messed up; I need to run dpkg-reconfigure xserver-xorg.
2. My wireless adapter never worked.
3. My sound never worked.

Otherwise, working on Kubuntu has been a breeze. I have had no problems installing new packages, and settings and configuration are fairly easy.

I wanted to try out PCLinuxOS (I have a DVD for it), but my laptop's drive is dead and it does not allow me to boot from a USB external DVD drive. I guess I will have to try it out in a virtualization environment like QEMU or VirtualBox.

Shell scripting errors on Ubuntu/Kubuntu

If you are using Kubuntu and trying to run shell scripts for (say) building a kernel or a toolchain, you have probably encountered some errors. This is because /bin/sh is actually a symlink to /bin/dash in Kubuntu, and dash is a strict POSIX shell that does not support the bash-specific constructs ("bashisms") many such scripts rely on.

The solution is to create a symlink to /bin/bash instead.

$ sudo rm /bin/sh
$ sudo ln -s /bin/bash /bin/sh


I faced the same problem while compiling the LTIB on my machine.
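If you would rather confirm the diagnosis before changing the symlink, the sketch below (the file path is made up) writes a one-line script using the bash-only [[ ]] test and runs it under both shells:

```shell
# See what /bin/sh currently points to:
ls -l /bin/sh

# A one-liner using the bash-only [[ ]] construct (a "bashism"):
echo 'x=hello; [[ $x == h* ]] && echo matched' > /tmp/bashism.sh

bash /tmp/bashism.sh            # bash understands [[ ]]: prints "matched"
sh /tmp/bashism.sh 2>/dev/null || echo "plain sh choked on it"
```

If /bin/sh is dash, the second run fails, which is exactly the kind of error build scripts hit.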

itoa() and scanf() alternative

itoa() is not defined in ANSI C and is not part of C++ either, but it is supported by some compilers. It is, however, not advisable to use it in your code even if your compiler supports it.

The better solution is to use sprintf() (or snprintf(), which also guards against overrunning the buffer).

Replace

itoa(number, buffer, 10);

with

sprintf(buffer, "%d", number);

(For radix 16 or 8, use the "%x" or "%o" format specifier instead.)

Reconfiguring the X server on Ubuntu

If, for some weird reason (which could probably be any recent update), your screen resolution goes for a toss and the only resolution you can select in the KControl display settings is 640 x 480, then you need to reconfigure the X server.

Either you can open the /etc/X11/xorg.conf file manually and break your head, or simply reconfigure the X server using the following command:

$ sudo dpkg-reconfigure xserver-xorg

This would start a configuration wizard. All you need to do is keep clicking "Yes" and then at the end select the correct resolution supported by your monitor like "1280 x 1024", save the configuration file and restart the X server.

Hopefully you shall be able to get back to your old resolution.

Mono's io-layer

I have been closely following Mono for a long time now, and I have even had a chance to work on it. There is a lot one can learn just by looking at the Mono code, but something I found particularly useful for porting Windows applications to Linux is Mono's io-layer.

The Mono developers used Windows APIs like EnterCriticalSection(), InitializeCriticalSection() and so on, implemented these functions themselves, and provided them as an IO abstraction layer. If you have a piece of code that was written for Windows and you need to port it to Linux, you can link it against the Mono io-layer and it works fine; I have done that for some code myself. There are certain APIs that Mono has not implemented (they implemented only the ones they were using, which covers most of the basic stuff), so you might have to implement those yourself, and if you do, think of submitting the patch to the Mono developers.

Using find and grep on Linux to search for strings within files

If you are working from the console and you need to find a string, or let us say a function name, in some file, you can use grep and find along with some wildcard patterns to help you out.

Getting the files first
Let's say that you are looking only for files with the extension .c; use:

$ find . -type f -name "*.c"

This finds all such files from (.) the current directory. If you need to find something from some other directory use:

$ find /path_to_the_directory -type f -name "*.c"

The -type f looks only for files.

Now if you want to find both *.c and *.h files use:

$ find . -type f -name "*.c" -or -name "*.h"

You can use -or (or its short form, -o) to add more such extensions:

$ find . -type f -name "*.c" -or -name "*.cpp" -or -name "*.h"

This would list all the files with the extension *.c, *.cpp and *.h

Note that -name actually takes shell glob patterns rather than full regular expressions, but character classes still work:

$ find . -type f -name '*.[ch]'

This will list all the files with the extension .c or .h.

Similarly, if you want files which start with a capital letter, you can use

$ find . -type f -name '[A-Z][a-z]*.[ch]'

and so on.

If you want a full regular expression, use -regex instead of -name (note that -regex matches the whole path, hence the leading .*/):
$ find . -type f -regex '.*/[A-Z0-9._%+-]+\.[ch]'

Searching for the string in the files found
Now that we have our list of files that we are interested in, let us search for our string. Assuming that I am trying to find (say) "MyFunction", I would use

$ grep "MyFunction" `find . -type f -name '[A-Z]*.[ch]'`

This would grep for MyFunction in each and every file found by find.

If you don't know the case, you can ignore it using -i with grep:

grep -i "MyFunction" `find . -type f -name '[A-Z]*.[ch]'`
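With GNU grep you can also skip the nested find entirely: -r recurses and --include filters by glob. A small sketch (the file names and the function are made up):

```shell
# A small tree to search:
mkdir -p /tmp/code/sub
printf 'int MyFunction(void);\n' > /tmp/code/Main.c
printf 'nothing to see here\n'   > /tmp/code/sub/notes.txt

# -r: recurse, -n: show line numbers, --include: only *.c / *.h files
grep -rn --include='*.[ch]' "MyFunction" /tmp/code
# → /tmp/code/Main.c:1:int MyFunction(void);
```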

Update: Using egrep with Xargs to handle long argument list -> here

Using etags with emacs

1. First tag the entire code. Go to the first directory and run

$ etags `find . -name "*.c" -o -name "*.cpp" -o -name "*.h"`

This is if you are looking for C/C++/header files. This will generate a "TAGS" file.

2. Open emacs and go to your code directory

3. Let emacs know about the tag file

M-x visit-tags-table [location of the TAGS file]

4. Put the cursor on the function name and do the following

M-. (which translates to ALT-.)

This would take you to the function anywhere in the entire directory structure

5. To go back use

M-*, which translates to ALT-SHIFT-*

Embedded Conference Bangalore Notes

I went to the Embedded Systems Conference that was held at the NIMHANS convention center. Some of the sessions were very useful while others were boring and useless. I shall write about some of the good sessions that I attended.

Beyond C, National Instruments

Although in the beginning I thought that this session would be about some new language for writing embedded code, it turned out to be a demonstration session by a couple of good speakers from National Instruments, Bangalore, who demonstrated LabVIEW.

The presentation was nice, coupled with videos of people from all around the world showing how they have successfully used LabVIEW in the areas of medicine, robotics (RoMeLa, Virginia Tech) and a speech-controlled wheelchair (Ambient).

LabVIEW is a graphical system design platform with which developers can write code as blocks and have it downloaded directly into an FPGA. Apart from that, there are modules available like LabVIEW Real-Time, which can run the code on top of an embedded OS like VxWorks, cutting down on the development time.

Apart from NI there are some other players in this space as well, like MathWorks and Ptolemy.


Static Code Analysis - David Kalinsky


David Kalinsky (http://www.kalinskyassociates.com/DavidBio.html) is a PhD working on high-availability, safety-critical systems. He gave a nice presentation on static code analysis tools.

To start with, David said that static code analysis tools are not 100% ready. There are certain vendors who claim that their tools are good, but they do not take care of many important things for multithreaded applications. As an example, he said that if we have 100 threads in our application with each thread having 100 lines of code, it is not wise to invest in SCA tools at this point in time; however, if we have 10 threads with 10,000 lines of code each, SCA might be useful. This clearly suggests that for heavily multithreaded apps SCA is not yet ready.

Before moving into the details of static analysis, David talked about the C language and how dynamic analysis is not a completely foolproof mechanism for ensuring that the code is covered.

C/C++ compilers (if not used with warnings enabled, like -Wall) will excuse the developer's use of dangerous code and generate the assembly for it anyway. He suggested that the compiler ought to be more critical, to stop such writing practices.

Talking about static analysis tools, David said that there have been two generations of SCA tools so far, and the third generation is being built and getting better by the day.

First generation: In England, a long time back, some people created a subset of C which they called MISRA-C. This contained only the constructs that were considered safe, and it was used in many cars (the automotive business), but MISRA-C checkers only took care of about 2/3rd of the restrictions, because the remaining 1/3rd cannot be checked by a compiler.

Second generation: In Virginia, some developers went on to make the open source tool called lint. It works like a compiler, the difference being that it does not produce code but checks the code for vulnerabilities. The problem with lint is that it is very shallow when it comes to reporting bugs: it says "there MIGHT be" a bug in its report, as if it is not sure whether there is a bug or not.

Third generation: The third generation tools dig deep, but there is still no 100% certainty that all the bugs will be found. The question that comes to mind is: how deep should they dig?

To understand that, we need to look at the criteria for code coverage. One is what we call "line coverage": we run enough tests to ensure that each and every line of code has been exercised. That is easier said than done; even if the whole testing process is automated, it is virtually impossible to achieve 100% line coverage, and it is known that 2-5% of code is never exercised in the first place.

But David says that even 100% line coverage is not good enough. The better option is what he calls "path coverage". Path coverage is based on the criterion that whenever a branch is encountered, the tool needs to go down each path at least once: functions called within functions, switch cases, if and else statements, and so on. Even then, it would take years to write test cases for 100% path coverage, yet there are scenarios, like DO-178B certification, where path coverage is required. Some other metrics are decision coverage, condition coverage, multiple decision coverage, and modified condition/decision coverage (MC/DC).

Dynamic Analysis

- Observes an executable at runtime
- Is very useful for catching dynamic memory corruption
- Requires test cases that are realistic and relevant to the code
- Valgrind is a well-known open source dynamic analysis tool

Downsides

1. You have to have software that runs first (an executable). Unit testing finds the small bugs; you have to wait till the integration testing phase for the bigger ones.

2. Analysis is slow

3. We start cutting corners, testing only the cases that we planned for, and in some cases tests are skipped.

4. The results are sometimes non-deterministic. There might be bugs that are not repeatable (possibly in the interleaving between the threads; the test cases might not cause the interleaving that is required).

Static Analysis

- Defects are detected early
- No test cases required; smart algorithms do the work
- Analysis is fast
- Analysis can be deterministic (because the analysis is done offline)

Downsides

1. Static analysis tools don't yet understand all languages (support is mostly limited to C/C++)

2. If you have assembly code written within C (as an example), that code is treated as a benign black box because the tool does not understand the language, and it starts making conservative assumptions about the code.

3. Sometimes there are false positives (false alarms, usually around 1%)

4. Sometimes there are false negatives (the tool will miss a bug)

The Ada compiler is much stricter when it comes to not letting people write dangerous code, which is the reason why Ada is still the preferred language for writing code in aerospace.

The thing to note is that Dynamic Analysis Tools and Static Analysis Tools complement each other and both should be used.

Working of a Static Analysis tool

These are the steps which are usually taken

1. A call graph is created using code, makefiles etc.

2. The functions are examined bottom-up. Every single function is looked at and any suspicious activity is bookmarked (like pointer assignments, dereferencing, etc.)

3. Every function in the call tree is reviewed again, taking into account the path to the function and every sub-function (the control flow graph)

4. The code defects are noted and reported.
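As a toy illustration of the flavor of step 2 (bookmarking suspicious constructs), here is a naive grep-based check, nothing like a real SCA tool, that flags calls to the notoriously unsafe gets(); the file and its contents are made up:

```shell
# A deliberately unsafe C file to scan:
mkdir -p /tmp/sca
cat > /tmp/sca/input.c <<'EOF'
#include <stdio.h>
int main(void) {
    char buf[16];
    gets(buf);      /* unbounded read: classic defect */
    return 0;
}
EOF

# "Bookmark" every call to gets() with file name and line number:
grep -rn --include='*.c' 'gets(' /tmp/sca
# → /tmp/sca/input.c:4:    gets(buf); ...
```

Real tools do this over a control flow graph rather than raw text, which is exactly why they catch defects a pattern match cannot.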

Tools

This website lists all the static analysis tools available on the face of the earth, and it also has the names of the three which David suggested - Coverity, Klocwork and PolySpace.


Learning from Disaster - Jack Ganssle

This was a very good and interesting session by Jack Ganssle (http://www.ganssle.com/bio.htm), a very well-known person in the field of embedded systems. Jack talked about small things that people do not take care of in their designs, which result in huge blunders, and throughout the presentation he gave lucid examples from real-life disasters. Here are a few.

Tacoma Narrows Bridge

The USP of the bridge was that it was built at the cheapest cost, but the bridge had some serious problems. During high winds it would start oscillating in a wave-like fashion (check the video on Wikipedia, http://en.wikipedia.org/wiki/Tacoma_Narrows_Bridge), which ultimately led to its destruction. The flip side was that the person who designed this bridge (Leon Moisseiff) had designed bridges in the past that had the same problem.

So,

1. Cheaper sometimes turns out to be more expensive. (The bridge had to be remade)

2. Management might want lower cost, but it can't cheat Physics.

3. We need to learn from the past (now all bridge designs in the US are tested in a wind tunnel)

4. The problem is that in the embedded world we keep making the same mistakes over and over again and do not share them with anyone.

Clementine Lunar Failure

Clementine (http://en.wikipedia.org/wiki/Clementine_mission) failed because:

1. Schedules were tight and people were working 60-80 hour weeks. Schedules can't rule, because tired people make mistakes.

2. Software that was put in the machine was not tested.

3. There were watchdogs but they were not used.

4. There was no version control system used.

Mars Exploration Rover


The rover was supposed to work for only 90 days, and it is still sending data to NASA. But initially it had a problem. When it started its work and began the drilling process, it simply stopped. The scientific data being created was put on the flash file system, which became full. The engineers tried to free space by deleting files, but it being a FAT-type file system, the directory entries still persisted. A watchdog caught the resulting exception and restarted the rover, and every time it would encounter the same problem and restart, again and again.

The rover was to work for 90 days but was never tested for more than 9 days, and the exception handling was awful. It seems 6 other NASA missions had the same problem (as they used the same OS) - we have to learn from our mistakes and past experiences.

Ariane 5

Ariane had four great first launches. For the 5th one, people were so confident that there was a payload of half a billion dollars on board. But Ariane 5 blew up 40 seconds into the launch.

The problem was with the Inertial Navigation System. The Ariane folks changed their hardware but reused the old code. There was a place where a 64-bit floating-point value was being converted to a 16-bit integer value. The conversion overflowed, the resulting exception shut the Inertial Navigation System down, and there was no backup!

So the learnings are:

- we need to be very careful with ported code
- never assume that the software would never fail
- test everything.

Therac 25

This was a radiation therapy machine (http://en.wikipedia.org/wiki/Therac_25) for treating tumors, which had a serious flaw due to which six people received massive overdoses of radiation, several of them fatal. There was a bug which would keep saying that the dosage had not been given even when the operator had pressed the button. So the operator would press the button again, and the system would repeat the same, until so much radiation had been given that the patient died.

There is an MIT paper about the flaw, which you can read here.

- The entire code was written by one person, who then left the company. Perform code inspections.
- They were using a homegrown RTOS which had a synchronization problem. Use only tested and certified RTOSes.

There are a lot more examples in the actual slides presented by Jack, which are attached to this post.

Overall learnings

1. Do code inspections

2. Testing has to be adequate

3. Simulation is good, but in the end it is not reality. Perform testing on real systems.

4. Exception handlers are a constant source of problems. Write good handlers.

5. Watchdogs should be used. They save lives

6. Use the methods of "Design by contract" (http://en.wikipedia.org/wiki/Design_by_contract )

7. Use a version control system

8. Be terrified of the C language

C (worst case): 500 bugs/KLOC
C (automatic code generation): 12.5 bugs/KLOC
Ada: 4.8 bugs/KLOC
SPARK: 4 bugs/KLOC (the compiler has a static analysis tool)

9. Think of using MISRA-C

10. Use static analysis and dynamic analysis (lint/Valgrind)

11. Schedules can't rule as tired people make mistakes

12. Reuse, sometimes, is very difficult.

13. Be wary of financial shortcuts. Management will always want something at a low cost.

14. Conduct scientific post-mortems

15. Last but not the least, "Learn from the mistakes of others"

QPS - a GUI Process manager for Linux

QPS is an X11-based visual process manager. Although you can get the same information using top from the console, QPS lists processes in a tree like "pstree", provides features to filter the processes, and gives a graphical representation of CPU and memory usage just like top.

On Ubuntu, you can install it using:

$ sudo apt-get install qps
$ qps &

Once started, it appears as an icon next to the clock on the taskbar.

Read more about it here