Linux kernel deferred stack trace

A stack trace dump is an essential debugging aid. In the Linux kernel, printing the current stack trace is simple:

#include <linux/printk.h>

dump_stack();

Sometimes, however, you want to capture the stack trace for deferred printing. This is useful when the indication that something went wrong only appears later, when the original execution context is long gone. To do so, use save_stack_trace().

#include <linux/stacktrace.h>

#define STACK_DEPTH 16

static unsigned long entries[STACK_DEPTH];
static struct stack_trace trace = {
    .entries = entries,
    .max_entries = ARRAY_SIZE(entries),
};

int some_routine(...)
{
    ...
    /* save our stack trace */
    trace.nr_entries = 0;
    trace.skip = 2; /* skip the two most recent entries (the capture machinery itself) */
    save_stack_trace(&trace);
}

To print the stored trace use print_stack_trace(). The second argument is the number of spaces to indent each printed line.

int some_other_routine(...)
{
    ...
    if (something_went_wrong)
        print_stack_trace(&trace, 2);
}

This code example should work in kernels since version 2.6.22. Earlier kernels have a slightly different save_stack_trace() signature; see commit ab1b6f03 for details.
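On a related note, newer kernels can also print a single saved address, along with its symbol name, using the %pS printk format specifier. A minimal sketch, assuming the trace saved above captured at least one entry:

/* print the first saved entry with its symbol name */
printk(KERN_DEBUG "saved from: %pS\n", (void *)entries[0]);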

Posted Sun Jun 3 14:44:19 2012 Tags:

Setting Linux kernel module parameters

Many Linux kernel modules have parameters that can be set at load time, at boot time, and sometimes at run time. Below I'll demonstrate each method.

Setting module parameter at load time

The easiest way to load a kernel module at run time is with the modprobe command. To set a module parameter, put the parameter name and value on the modprobe command line:

modprobe foo parameter=value

The modinfo command lists the parameters that a given kernel module accepts, along with the expected type of each parameter. For example, on my Linux 3.2 based system the command modinfo ambassador shows the following parameter info:

parm:           debug:debug bitmap, see .h file (ushort)
parm:           cmds:number of command queue entries (uint)
parm:           txs:number of TX queue entries (uint)
parm:           rxs:number of RX queue entries [4] (array of uint)
parm:           rxs_bs:size of RX buffers [4] (array of uint)
parm:           rx_lats:number of extra buffers to cope with RX latencies (uint)
parm:           pci_lat:PCI latency in bus cycles (byte)

Simple values of type byte or uint are represented by a number:

modprobe ambassador debug=1

Array values are set using a comma separated list of values:

modprobe ambassador rxs=1000,2000,3000,4000

String (charp) values are set using a string:

modprobe parport_pc init_mode=epp

Setting module parameters at boot time

When a module is compiled into the kernel you can't load it at run time, and you can't pass it parameters with modprobe either. You can, however, set the module's parameters from the kernel command line, as described in Documentation/kernel-parameters.txt. The equivalents of the modprobe commands above are the following strings on the kernel command line:

ambassador.debug=1

ambassador.rxs=1000,2000,3000,4000

parport_pc.init_mode=epp
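
You can verify that such parameters actually reached the kernel by inspecting the command line it booted with:

cat /proc/cmdline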

Setting module parameters at run-time

Sometimes a kernel module allows setting a parameter at run time. In this case you'll find the parameter under /sys/module/modulename/parameters/, with writable file permissions. The debug parameter of the ambassador module is one example. Set a value with a simple echo command:

echo -n 1 > /sys/module/ambassador/parameters/debug
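
For module authors, what makes a parameter writable at run time is the permissions argument of the module_param() macro: a non-zero mode exposes the parameter under /sys/module/, and a writable mode allows changing it. Here is a minimal sketch of how such a parameter might be declared (illustrative only, not the actual ambassador source):

#include <linux/module.h>
#include <linux/moduleparam.h>

static ushort debug;
/* 0644: world-readable, root-writable via sysfs */
module_param(debug, ushort, 0644);
MODULE_PARM_DESC(debug, "debug bitmap, see .h file");
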
Posted Sun May 20 14:05:19 2012 Tags:

Two git tips: list remote repo; show direct merge path

Listing remote git repositories

Sometimes public git repositories have no convenient web interface. Their existence can be inferred from a mention in a git merge commit, like this one. To get a list of branches in a repository using the native git protocol, use the git ls-remote command:

git ls-remote -h git://sources.calxeda.com/kernel/

Use -t to get a listing of tags instead.
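
The output is a list of commit ids, one per line, each followed by the full ref name. It looks something along these lines (the ids and branch names below are made up for illustration):

f1d2d2f924e986ac86fdf7b36c94bcdf32beec15	refs/heads/master
e242ed3bffccdf271b7fbaf34ed72d089537b42f	refs/heads/highbank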

Show merge path log

By default, git log does not necessarily reflect the actual "ancestry" line of a commit. To find out which merges a given commit went through on its way into the main development tree, run

git log --ancestry-path commit1..commit2

where commit1 is the commit you are interested in, and commit2 is any later commit, such as a version tag.

Posted Sun May 6 20:54:16 2012 Tags:

Introduction to Cross Compilation, Part 2

The first part in this series introduced the concept of cross compilation. This post is about getting a cross compiler.

Obtaining a Cross Compiler

The easiest way to obtain a cross compiler is to download a ready-made pre-built one. Besides being easy to obtain, a pre-built binary toolchain is the most useful for the general case of building a kernel and a userspace filesystem. Some special cases require a specially tailored toolchain built from source; I'll show how to build a toolchain from source in the next post.

A short terminology note: in the following text I use the terms "cross compiler" and "toolchain" interchangeably. They have the same meaning in this context; the term "toolchain" seems to be more popular, though.

Sourcery

The most well-known source of pre-built cross compilers is the embedded software division of Mentor Graphics, formerly known as CodeSourcery, an independent company that Mentor acquired in 2010. They release the "Sourcery CodeBench Lite Edition" free of charge. Sourcery CodeBench is a collection of cross compilers for several CPU architectures, including ARM, PowerPC, MIPS, and Intel x86, among others.

For each architecture there are a number of target options; the one you need for embedded Linux work is the "GNU/Linux release". Always select the latest version, unless you have a very good reason to avoid it. There are also a few packaging formats to choose from; I prefer the "IA32 GNU/Linux TAR" format. Installing it is just a matter of extracting the tar file into the /opt directory. For example, to install the latest MIPS toolchain, run the following as root:

tar xjf mips-2011.09-75-mips-linux-gnu-i686-pc-linux-gnu.tar.bz2 -C /opt

One big advantage of Sourcery's toolchains is that the people making them, former CodeSourcery employees, are deeply involved in upstream development of the GCC compiler.

Linaro

The Linaro organization also releases pre-built cross compilers for their target platform: newer ARM processors based on the Cortex-A series. Download the latest version from here; you need the "Linux binary" one.

Using the Cross Compiler

This is just a quick peek at cross compiling for the impatient new embedded Linux coder. I'll come back to this issue later in some greater depth.

First, put your newly installed toolchain in your PATH. For example, the Sourcery MIPS toolchain mentioned above needs the following command:

export PATH=$PATH:/opt/mips-2011.09/bin

Create a simple "Hello World" program, and save it in hello.c:

#include <stdio.h>

int main (void)
{
    printf ("Hello World!\n");

    return 0;
}

Compile your program using the MIPS toolchain as follows:

mips-linux-gnu-gcc -Wall -o hello hello.c

Copy the resulting hello binary file to your target machine and run it there. If all goes well you should see the expected output.
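
Before copying, it's worth verifying that the binary really targets MIPS rather than your host:

file hello

For the toolchain above, file should report a MIPS ELF executable; if it reports an x86 one, the native gcc was picked up instead.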

There are many details to get wrong here, ranging from ABI issues, to C library and kernel version compatibility. I'll cover some of these issues in future posts.

Posted Tue Apr 10 14:59:13 2012 Tags:

Introduction to Cross Compilation, Part 1

This post is the first in a series on cross compilation. In this series I'll introduce the concept of cross compilation and how to use it. Although cross compilation has many different uses, I'll focus in this series on its use for embedded Linux systems development.

What is Cross Compilation?

When you develop a desktop or server application, the development platform (the machine that runs your compiler) and the target platform (the machine that runs your application) are almost always the same. By "platform" I mean the combination of CPU architecture and Operating System. Building executable binaries on one machine to run them on another machine with a different CPU architecture or Operating System is called "cross compilation". Cross compilation requires a special compiler, called a "cross compiler", and sometimes just a "toolchain".

For example, desktop PC application developers for Windows or Linux can build and run their binaries on the very same machine. Even developers of server applications generally have the same basic architecture and Operating System on both their development machine and their server machine. The compiler used in these cases is called a "native compiler".

On the other hand, developers of an embedded Linux application that runs on a non-PC architecture (like ARM, PowerPC, MIPS, etc.) tend to use a cross compiler to generate executable binaries from source code. The cross compiler must be specifically tailored for cross compiling from the development machine's architecture (sometimes called the "host") to the embedded machine's architecture (called the "target").

Note: cross compilation is only needed when generating binary executables from source code written in a compiled language, like C or C++. Programs written in an interpreted language, like Perl, Python, PHP, or JavaScript, do not need a cross compiler. In most cases interpreted programs should be able to run unchanged on any target. You do need, however, a suitable interpreter running on the target machine.

What is Cross Compilation Good for?

I have covered one reason for cross compilation above: the target machine has a different CPU architecture than the development host. In this case cross compilation is necessary, because the binaries that a native compiler generates simply won't run on the target embedded machine.

Sometimes cross compilation is not strictly necessary, but native compilation is not practical or convenient. Consider, for example, a slow ARM9 based target machine running Linux. Having the compiler run on this target would make the build process painfully slow. In many cases the target machine is just under-powered, in terms of storage and RAM, for the task of running a modern compiler.

Practically speaking, almost all embedded Linux development is done with cross compilers. Powerful PC workstations are used as development hosts to run the development environment (text editor, IDE) and the cross compiler.

In the next post in this series I'll show how to get a cross compiler for embedded Linux development.

Posted Sun Mar 25 21:47:18 2012 Tags:

Short KGDB Guide for Embedded Linux kernel Debugging

Here is a short writeup of my experience with KGDB for debugging the Linux kernel running on a PowerPC target from a standard PC host. The PC host and the target board were connected using an RS232 cable. In my case, since PCs these days come with no built-in RS232 connector, the cable was actually a USB-to-RS232 cable.

The list below only describes my setup. For more details and options see the full guide.

Also note that although this writeup describes cross debugging a PowerPC target, the same should work for other targets, provided that you have the correct cross gdb for your target.

  1. Make sure that your kernel .config file includes the following options:

    CONFIG_MAGIC_SYSRQ=y
    CONFIG_KGDB=y
    CONFIG_KGDB_SERIAL_CONSOLE=y
    

    You'll find these options under the "Kernel hacking" menu.

  2. On your target board set the serial communication parameters:

    echo ttyS0,115200 > /sys/module/kgdboc/parameters/kgdboc
    

    You can also use the kgdboc kernel parameter for this, see the full guide.

  3. Load the kernel vmlinux file into your cross gdb on the host PC machine:

    powerpc-linux-gnu-gdb vmlinux
    
  4. On your target board trigger kgdb using SysRq:

    echo g > /proc/sysrq-trigger
    

    For more information about what SysRq is and what it is good for see the Documentation/sysrq.txt file in the Linux kernel source.

  5. If you are connected to the serial console of your target board, detach from the console without resetting the serial line. The details of how to do this are specific to your terminal program. For example, for picocom users the sequence is Ctrl-a, Ctrl-q.

  6. In your cross gdb running on your host PC machine set the correct baud rate for your serial line, for example:

    (gdb) set remotebaud 115200
    
  7. Finally, attach to the target board:

    (gdb) target remote /dev/ttyUSB0
    
  8. Once you hit continue in your gdb session, if none of your breakpoints trigger, you can manually break back into gdb with the g command of SysRq, as in the sketch below.
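
From this point on it's an ordinary gdb session. A typical exchange might look like this (panic is just an illustrative breakpoint target):

(gdb) break panic
(gdb) continue

When a breakpoint hits, you can inspect the kernel with the usual bt, print, and friends.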

Posted Sun Mar 11 14:12:00 2012 Tags:

Get Input Keys Status Under Linux

Sometimes direct access to the Linux input subsystem is useful. In a highly space-constrained environment you might find yourself without any input wrapper library. This is the case with initramfs: in order to keep boot time short, the initramfs must be as small as possible. Usually my initramfs contains little more than a statically linked Busybox.

In one of my projects the system user could force a special boot mode by pressing a magic key combination on the keypad. This is useful for recovery mode, software upgrades, etc. So, how do we know which keys are currently pressed? The EVIOCGKEY ioctl comes to the rescue. The EVIOCGKEY macro is defined in the linux/input.h header. You provide a bit-array buffer to this ioctl; each bit represents a key code according to its offset in the array. The key code numbers are also defined in linux/input.h.

Here is a short example (with error checking and common headers omitted for brevity):

#include <linux/input.h>

uint8_t keys[16];
int fd, i, j;

fd = open("/dev/input/event0", O_RDONLY);
ioctl (fd, EVIOCGKEY(sizeof keys), keys);

for (i = 0; i < sizeof keys; i++)
    for (j = 0; j < 8; j++)
        if (keys[i] & (1 << j))
            printf ("key code %d\n", (i*8) + j);

Make sure that the keys buffer is large enough to hold the highest key code number you are interested in.
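
If you want to cover every key code the kernel defines, size the buffer based on the KEY_MAX constant from linux/input.h:

uint8_t keys[KEY_MAX / 8 + 1];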

Posted Sun Mar 4 20:27:27 2012 Tags:

Use of objdump to Disassemble PowerPC e500 Specific Instructions

Whenever you run objdump -d on an object file built for the e500/e500v2 variants of PowerPC, you may notice lines like:

c0000018:       7c 00 1f 24     .long 0x7c001f24

in the middle of the instruction stream. The problem is that objdump doesn't know the exact PowerPC variant the object file targets. This is what the -M parameter is for: add -M e500x2 to specify e500v2, or -M e500 for e500v1. The line above should now look like:

c0000018:       7c 00 1f 24     tlbsx   r0,r3
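
Putting it all together, the full command for an e500v2 kernel image is something like (use your cross binutils' objdump, e.g. powerpc-linux-gnu-objdump, if your host objdump lacks PowerPC support):

objdump -M e500x2 -d vmlinux
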
Posted Sun Feb 26 15:15:24 2012 Tags:

Verifying DKIM Signatures From Command Line

I had never felt the need to verify a DomainKeys Identified Mail (DKIM) message myself. Then, one day, I noticed a suspicious-looking message in my Inbox. The subject was "Hello Baruch". Since I didn't know the sender, I assumed it was definitely a spam message. I took a quick look at the message body, and then, just before hitting "Delete", I noticed that this was a job offer from Google. Well, at least that's what the message text said. I took a second look and decided to give it a chance. But before responding to this offer I wanted to make sure that this email had actually travelled through Google's servers. This is where DKIM proved useful.

To verify the DKIM signature of an email message, download and install pydkim. If you are a Debian/Ubuntu user, just run

apt-get install python-dkim

or

apt-get install python3-dkim

Save the message in an RFC 822 formatted file and pipe it through the dkimverify script:

dkimverify < email.mbox

If the message is authentic you should see

signature ok

with 0 exit status. Otherwise, the output is

signature verification failed

with an exit status of 1.
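
The exit status makes dkimverify handy in scripts, for example:

if dkimverify < email.mbox; then
    echo "DKIM: valid"
else
    echo "DKIM: invalid"
fi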

Posted Tue Feb 7 14:36:02 2012 Tags:

WiFi Sniffer Setup With Linux mac80211 Drivers and Wireshark

Newer 802.11 Linux drivers written for the mac80211 Linux WiFi networking layer should all support sniffing all air traffic. You'll need the new nl80211 based iw tool, which is replacing the older Linux Wireless Extensions based iwconfig.

First, create a "monitor" interface:

iw dev wlan0 interface add wmon0 type monitor

where wlan0 is the name of your WiFi network interface, and wmon0 is the name of the newly created monitor interface. Bring wmon0 up with:

ifconfig wmon0 up
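
A monitor interface captures whatever channel the hardware is currently tuned to. To watch a specific channel, tune it explicitly first (channel 6 here is just an example):

iw dev wmon0 set channel 6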

Now start the capture, and save to a file with:

dumpcap -i wmon0 -w wlan0.pcap

This command saves the captured packets in wlan0.pcap. Later you can open wlan0.pcap with Wireshark, and examine the packets.

Posted Sun Jan 29 14:47:49 2012 Tags:
