
Direct debugging of executables


STWorkbench does not directly support debugging executables built outside the IDE. However, the following steps allow you to run or debug an executable built using the command-line tools.
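
For example, assuming a hypothetical source file myapp.c, an executable suitable for source-level debugging can be built with the ST40 cross-compiler and the -g flag:

  host% sh4-linux-gcc -g -o myapp myapp.c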

  1. To set a preference to ignore build errors when launching a run/debug session, select Window > Preferences, then select Run/Debug > Launching.

  2. In the penultimate section, Continue launch if project contains errors, check that Always is selected.

  3. Select File > New > Project, then select the C > C Project wizard.

  4. Enter a name for the project in the Project name text box.

  5. Select Makefile project from the Project types list.

  6. Select ST40 Linux GCC from the Toolchain list.

  7. Uncheck Use default location and browse to the directory containing your executable.

    Note: The wizard will warn that "Directory with specified name already exists!". This is expected and can be ignored.

  8. Click Next.

  9. In the project settings, make sure the binary parser is set to the ST Elf Parser. You can modify this setting later using Project > Properties.

  10. Click Finish. If a confirmation appears, click Yes. The project is created.

  11. You can now create a new run/debug configuration for your project in the normal way. Make sure you have selected the STLinux Target Application launch configuration type and that you have chosen a target that is booted and running.

    If no source code is available for the executable, it is still possible to run it and to debug the code in instruction-stepping mode, using the disassembly.

For more details on how to set up standard make projects and all other aspects of the tool, please see the documentation, the training material and the online help.


Browsing remote filesystems


Browse external filesystems

All STLinux systems have a filesystem. You can browse the files on the target using the Remote Filesystem view, which also allows you to transfer files to and from the target.

  1. Select Window > Show View > Other > STLinux > Remote Filesystem.

  2. The Secure connection check box is checked by default and connects to the remote system using ssh. If you do not want an encrypted connection, uncheck it to use rsh.

    The root user must be able to log in to the target using SSH with an empty password or without being prompted for a password. This is the default, provided the root user has remotely logged in at least once previously.
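
    You can verify this from a host shell; the login must succeed without any prompt (here target stands for the board's name or IP address):

      host% ssh root@target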

  3. Enter the remote system name or IP address in the Target field.

  4. Press Return. The Progress Information shows Executing remote command.

  5. The filesystem of the remote system is displayed.

Six icons are used.

      file
      executable
      folder
      symbolic link
      block device
      character device

Upload files to target

  1. From the buttons or drop-down menu, click Upload files to target, or select File > Upload. The Upload window appears.

  2. The check box allows connection using scp (checked, encrypted) or rcp (unchecked, not encrypted).

  3. If you used File > Upload, in the Target field enter the remote system name or IP address.

  4. In the Upload field, enter the file or folder to be copied to the remote system.

  5. In the To field, enter the destination directory to be used on the remote system.

  6. Click OK. The Progress Information shows Copying file(s) to target...

Copy to workspace

  1. Select a file or folder to copy from the remote filesystem to the STWorkbench workspace.

  2. From the buttons or drop-down menu, click Copy to workspace. The Copy to workspace dialog box appears.

  3. Select a folder and click OK.

Refresh

Refreshes the Remote Filesystem view.

Delete

Deletes files or folders from the remote filesystem. This is greyed out if nothing is selected.

Insert module

Inserts modules into the kernel. This is greyed out if no modules are selected.


Managing kernel modules

  1. Select Window > Show View > Other > STLinux > Kernel Modules.

  2. Make sure the target is booted and running. Enter the remote system name or IP address in the Target field.

  3. The Secure connection check box is checked by default and connects to the remote system using ssh. If you do not want an encrypted connection, uncheck it to use rsh.

  4. Press Return. The Progress Information shows Executing remote command.

  5. In the Kernel Modules view, the left-hand pane lists the kernel modules. Click a module.

  6. The right-hand pane shows the module information.

Remove selected module

  1. Select a kernel module in the left-hand pane of the Kernel Modules view.

  2. From the buttons or drop-down menu, click Remove module.

  3. The Progress Information shows Executing remote command.

  4. The selected kernel module is removed from the list.


Using STWorkbench to develop existing applications


Introduction

STWorkbench is an Eclipse-based IDE shipped with the STLinux distribution. It supports development and debugging of STLinux applications. STWorkbench can be easily used to drive an existing make system, without the need to convert applications to the STWorkbench build system ("Managed Make").

This article describes how to import an existing software tree into an STWorkbench project, build it and debug it.

STWorkbench concepts

To simplify use of the interface, the main concepts are described below.

  • Workbench - the main Eclipse/STWorkbench window.

  • Workspace - a container of projects.

    On startup a directory is chosen for the workspace. Projects can be either copied into that directory or developed in their original location. In either case they are displayed in the C/C++ Projects view, which is the main interface to the workspace.

    Note: Workspaces are handled individually for each user and do not need to be archived under source control. The Eclipse/STWorkbench ClearCase plug-ins map one workspace to each ClearCase view.

  • Project - a set of files and folders from which one or more executables or libraries are built.

    A project consists of a standard source tree, to which STWorkbench adds two additional files in the top level: .project and .cdtproject.

  • View - a tabbed sub-window in the workbench.

  • Perspective - a set of views.

    Perspectives allow a set of related views to be opened and closed together. For example, there is one perspective for C/C++ development and another for debugging.

  • Launch configuration - the settings required to run and debug an application.

    Once a launch configuration is created to run an application, it can also be used to debug the application.

Importing an existing application

To work on an application in STWorkbench, it must be imported into a project in the workspace. The easiest way is to wrap a project around the existing application tree. This writes two dotfiles (.project and .cdtproject) into the top level of the filesystem tree. No other changes are made to the application, so it can still be developed outside STWorkbench as before.

There are several different project types. For a C/C++ project with an existing make system, the correct project type to use is Makefile project. To wrap a Makefile project around your existing application:

  1. Start STWorkbench and specify a workspace directory in your home area; the exact location is unimportant. Once the dotfiles have been created, a project can be imported into any workspace.

  2. Select File > New > Project.

  3. In the C category, select C Project and click Next.

  4. Give the project a meaningful name.

  5. To leave the source tree in its original location, uncheck Use default location and enter the location of the tree in the Location field.

    Note: The wizard may warn that "Directory with specified name already exists!". This can be ignored.

  6. Select Makefile project and the appropriate toolchain.
  7. Click Next. The Select Configurations page appears. Click Advanced settings....

  8. To modify the make commands emitted when cleaning and building the project, select the C/C++ Build section:

  • The make command to be invoked and build directory can be set on the Builder settings tab.
  • The make targets can be set on the Behaviour tab.
  • To trigger a rebuild whenever a file is saved, check Build on resource save (Auto Build).
  • In the C/C++ Build > Settings section, check the ST Elf Parser. This is used to extract symbol information from binaries.

  • In the C/C++ Build > Discovery options section, change the Compiler invocation command to the name of your compiler, for example sh4-linux-gcc. This allows STWorkbench to locate the correct set of include files.

  • Click Finish. The project is created.

    Note: To change any of the settings, select Project > Properties.

Building the application

    When the project is built STWorkbench invokes make, calling the make targets specified in the Make Builder section of the new project wizard. The output of the build process is displayed in the Console view.

    To invoke more complex make commands, select Window > Show View > Other > Make > Make Targets to open the Make Targets view. This provides an interface for invoking make in any part of the project tree with any arguments. Make targets defined in this way are saved in the project for future use.

    For more information on working with Make Targets, see Driving an existing make system.
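
    For example, a make target defined in this view might be equivalent to running the following command manually; the directory and arguments are illustrative:

      host% make -C tests all CFLAGS="-g -O2"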

    Running and debugging the application

    Once an executable has been built in the project, it can be run and debugged.

    Note: Any executable that is enclosed in a project can be debugged; it is not necessary to develop and build it in STWorkbench.

    1. To run the executable, select Run > Run.... The Create, manage and run configurations dialog appears. The left-hand pane gives a list of launch configuration types.

    2. Highlight STLinux Target Application and click New. A launch configuration is created; give it a meaningful name.

    3. Select the Main tab. Fill in the Project and C/C++ Application fields.

    4. Select the Debugger tab. Select the ST40 Linux GDB Debugger. Fill in the Target name or IP address of a board that is currently running STLinux.

      Note: It is a prerequisite of running and debugging applications that the STLinux target board is booted, reachable on the network, and that the chosen user (probably root) is able to log into the target using ssh (for example, ssh root@target) without a password or any prompting.

    5. Click Run. The executable is copied to the target and executed. Its stdout and stderr streams appear in the Console view.

      Note: To run the application again, click Run > Run or click the Run toolbar button.

    6. To debug the application, click Run > Debug or click the Debug toolbar button. The launch configuration is reused. If prompted, move to the Debug perspective.

    The controls to debug the application are similar to those found in other GUIs. To set breakpoints, double-click the border of the source file. The Debug view has toolbar buttons to step, resume and terminate your application. It also displays thread lists and callstacks.

    The Debug view also contains two entries for consoles, indicated by an icon showing a PC tower and a green triangle. Click either of them to display it in the Console view. One is the console for your application, giving you access to stdin, stdout and stderr. The other is the console for gdb, where you can issue arbitrary gdb commands.

    Note: As no "(gdb)" prompt is displayed, it is not always clear when gdb is accepting input. Commands can only be entered when the application is suspended.

    For further information on how STWorkbench can be used to develop and debug large existing applications, see the STWorkbench tutorials.


    Cross debugging with GDB



    To set up a cross debug session using GDB, it is necessary to run debugging tools on both the target Linux system and the host system.

    1. On the target Linux system, the GDB debug server, gdbserver, needs to be told which port to use and which application to debug. Run gdbserver with the command:

      gdbserver localhost:<port> <application>

      where <port> is the port number to use (choose a port that does not conflict with any other ports in use) and <application> is the application to be debugged. For example, to debug /root/hello using port 3278:

      target% gdbserver localhost:3278 /root/hello
      Process application created: pid = 184
      Listening on port 3278

      Note: To support symbolic debugging, the application must have been compiled with the -g flag. This causes DWARF debugging information to be included in the executable. See the GDB documentation for details.

    2. On the host, run the appropriate debugger, for example sh4-linux-gdb, giving the name of the executable to be debugged as an argument. This is so that GDB can access its debug information.

    3. When the host debugger is running, connect to gdbserver using the target remote command:

      (gdb) target remote <targetip>:<port>

      where <targetip> specifies the target name or IP address, and <port> is the port number gdbserver is using.

      Using the example above, and supposing the target is 192.168.1.2:

      host% sh4-linux-gdb /opt/STM/STLinux2.3/devkit/sh4/target/root/hello
      GNU gdb STMicroelectronics/Linux Base 6.5-32 [build Jul 22 2008]
      Copyright (C) 2006 Free Software Foundation, Inc.
      <snip>
      This GDB was configured as "--host=i686-pc-linux-gnu --target=sh4-linux"...
      (gdb) target remote 192.168.1.2:3278
      Remote debugging using 192.168.1.2:3278
      0x29558080 in ?? ()
      (gdb)
    4. Initially the program is stopped at its entry point in the C runtime, so the first step is to run to main:

      (gdb) break main
      Breakpoint 1 at 0x400656: file main.c, line 20.
      (gdb) continue
      Continuing.
      Breakpoint 1, main () at main.c:20
      20        printf("Welcome to the application\n");
      (gdb)

      Note: To avoid having to enter these commands manually whenever GDB is invoked, they can be automated by creating a GDB startup script file. When GDB starts, it looks for a file named .shgdbinit in the home directory of the user and in the current directory, and (if found) executes it. The user can specify an alternative startup script by starting GDB with the command-line option --command=<script>.
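
      For example, a .shgdbinit automating this session might contain the following (a sketch reusing the address and port from the example above):

      target remote 192.168.1.2:3278
      break main
      continue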

    The target can now be debugged in the same way as a native application. For more information on how to use GDB, see the GDB documentation. GDB also contains extensive built-in help, accessible with the help command.


    Native debugging with GDB


    Native debugging using GDB

    To set up a native debug session using GDB, it is only necessary to run the GDB debugger on the target.

    Note: To support symbolic debugging, the application must have been compiled with the -g option passed to gcc. This adds DWARF debugging information to the executable.

    The GDB debugger on the target must be passed the name of the application to be debugged, so that it can access the debug information contained within the binary:

    target# gdb /root/hello
    GNU gdb 6.3
    Copyright 2004 Free Software Foundation, Inc. 
    <snip>
    This GDB was configured as "sh4-linux"...
    (gdb)

    The application can be run to main like this:

    (gdb) break main
    Breakpoint 1 at 0x400656: file main.c, line 20.
    (gdb) run
    Starting program: /root/hello
    Breakpoint 1, main () at main.c:20
    20        printf("Welcome to the application\n");
    (gdb)

    The application can be debugged in exactly the same way as it would be on a host machine. As before, refer to the built-in help and online documentation for more information.


    Kernel parameters


    Kernel parameters

    After the kernel has been built with KGDB support, pass the appropriate port options on the kernel command line, as shown below:

    • ST ASC port:

      kgdbasc=0,115200

    • Ethernet connection:

      kgdboe=[src-port]@<src-ip>/[dev],[tgt-port]@<tgt-ip>/[tgt-macaddr]

      where:
      src-port (optional): source port for UDP packets (defaults to 6443),
      src-ip: source IP to use (interface address),
      dev (optional): network interface (default is eth0),
      tgt-port (optional): port GDB will use (defaults to 6442),
      tgt-ip: IP address GDB will be connecting from,
      tgt-macaddr (optional): ethernet MAC address for logging agent (default is broadcast).

      Important note: src and tgt in this case are from the point of view of the test system. So src will be the test (or target) system and tgt is the development (or host) system.

    • kgdbwait: this makes KGDB wait for a GDB connection during booting of the kernel. If used it must be passed after the port options.
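
    For example, a complete set of KGDB options using the Ethernet connection, with kgdbwait passed last, might look like this (the IP addresses are illustrative; as noted above, src is the target board and tgt is the development host):

      kgdboe=@192.168.1.2/,@192.168.1.1/ kgdbwait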

    Connecting GDB to the kernel


    Connecting GDB to the kernel debugger

    The cross GDB for your architecture should be used, either sh4-linux-gdb or arm-linux-gdb.

    Before debugging, the debugger must connect to KGDB using the chosen communication method. For example:

    • Using the serial port:
      % sh4-linux-gdb
      (gdb) file vmlinux
      (gdb) set remotebreak 1
      (gdb) set remotebaud 115200
      (gdb) target remote /dev/ttyS1
      (gdb) continue

    • Using KGDB ethernet support:
      % sh4-linux-gdb
      (gdb) file vmlinux
      (gdb) set remotebreak 0
      (gdb) target remote udp:<src-ip>:<src-port>
      (gdb) continue

    The kernel can then be debugged like any other application.
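
    For example, once connected, a breakpoint can be set on a kernel function; sys_open is used here purely as an illustration:

      (gdb) break sys_open
      (gdb) continue

    KGDB stops the kernel and returns control to GDB the next time the breakpoint is hit.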


    Controlling kernel execution



    When debugging, it is possible to suspend and then resume kernel execution at any point.

    Suspending kernel execution

    • Using Ctrl+C

      Suspend kernel execution by pressing Ctrl+C on the GDB terminal. This signals KGDB to take control of the kernel and contact GDB.

    • Using Magic SysRq

      This enables the console device to interpret special characters as commands to dump state information or to invoke the kernel debugger.

      To add this support, enable the CONFIG_MAGIC_SYSRQ option during the configuration phase, in the section:

      Kernel hacking ---> Magic SysRq key

      The debugger can be entered by sending the letter g to the /proc/sysrq-trigger file as shown below:

      target# echo "g" >  /proc/sysrq-trigger
      SysRq : GDB
      Entering GDB stub

    Continuing kernel execution

    To continue kernel execution, use the GDB command continue. This instructs the KGDB stub to resume running the kernel.


    Debugging kernel modules


    Debugging the kernel modules

    GDB is able to detect when a module is loaded on the target. It then loads the module object file into its own memory to obtain the debugging information.

    The search path where GDB locates module files is set in the variable solib-search-path.

    1. On the target, load your module:

      target# insmod my_mod.ko

    2. Stop the kernel execution to confirm that the module is indeed loaded.

    3. On the host, load the module symbols and set your breakpoints:

      host% sh4-linux-gdb vmlinux
      GNU gdb 6.3
      Copyright 2004 Free Software Foundation, Inc.
      GDB is free software, covered by the GNU General Public License,
      and you are welcome to change it and/or distribute copies of it
      under certain conditions.
      Type "show copying" to see the conditions.
      There is absolutely no warranty for GDB.
      Type "show warranty" for details.
      This GDB was configured as "--host=i686-pc-linux-gnu --target=sh4-linux"...
      0x84031f10 in ?? ()
      Automatically enabled KGDB extensions...
      (gdb) set solib-search-path /user/my_module_2.6/
      Reading symbols from /user/my_module_2.6/my_mod.ko...
      expanding to full symbols...done.
      Loaded symbols for /user/my_module_2.6/my_mod.ko
      (gdb) info sharedlibrary
      From        To          Syms Read   Shared Object Library
      0xc0168000  0xc01680c0  Yes         /user/my_module_2.6/my_mod.ko
      (gdb) b mod_stm_open
      Breakpoint 1 at 0xc0168002: file /user/my_module_2.6/my_mod.c, line 43.
      (gdb) c
      Continuing.

    In order to locate the loadable module symbols, GDB must be connected to the target (where the modules have been loaded). This means that the command set solib-search-path /user/my_module_2.6/ must be run after GDB has connected to the target; otherwise it collects the symbols but does not know where they are located.

    This issue can be resolved using pending breakpoints, as described below.

    Problems may also occur when unloading modules if GDB still has breakpoints within the module. For example, if you unload a module (using the rmmod command), without removing all the relevant breakpoints in GDB, any attempt to start a new debug session (for example stopping the kernel execution with Ctrl+C) will cause the system to hang. This is because KGDB is being asked to perform actions on active breakpoints that are no longer accessible.
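
    For example, before unloading the module from the example above, delete its breakpoint in GDB and let the kernel resume, then remove the module on the target:

      (gdb) delete 1
      (gdb) continue

      target# rmmod my_mod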

    Debugging module_init and module_exit functions

    The procedure to debug the module_init function from a module is simple.

    Before loading your module, set a pending breakpoint on the module_init function, as shown below. You will then be able to debug it as soon as the module has been loaded.

    (gdb) file vmlinux
    (gdb) set breakpoint pending on
    (gdb) set solib-search-path /user/my_module_2.6/
    (gdb) info sharedlibrary
    (gdb) b my_module_init

    Currently, this method only works if (in the C code) you write the module_init function without the __init attribute. Likewise, you must write the module_exit function without the __exit attribute.

    Configuring GDB to debug a module

    Debugging can be simplified by defining a personal set of GDB commands. This is done using the GDB define command in a file which GDB can read when it starts.

    GDB automatically executes commands from .shgdbinit. (GDB also uses this file for loading the Linux kernel image on the ST40 target.)

    set auto-solib-add 0

    define mymod
      set solib-search-path /user/my_module_2.6/
      info sharedlibrary
    end

    define kgdbserial
      set remotebaud 115200
      target remote /dev/ttyS0
    end

    define kgdb_start
      set linux-awareness auto_activate 0
      file vmlinux
      set auto-solib-add 1
      kgdbserial
    end

    In the kgdb_start command, all linux-awareness functionality must be disabled in order to use the KGDB debug agent to debug the kernel. For more information about internal GDB commands, see the file /opt/STM/STLinux-2.3/devkit/sh4/doc/stlinux23-cross-sh4-gdb-6.5/jtag_kernel_debug.txt.

    Note: Alternatively, the developer can use a personal configuration file, as shown below:

    host% sh4-linux-gdb --command .my_gdbinit

    Troubleshooting, known limits and utilities


    Troubleshooting

    • GDB prints the following errors:

      Ignoring packet error, continuing...
      Ignoring packet error, continuing...
      Ignoring packet error, continuing...
      Couldn't establish connection to remote target
      Malformed response to offset query, timeout.

      • Using the serial port
        • Check whether the serial line speed given to KGDB is equal to the host serial speed. It is recommended to select the maximum speed supported by the port; a baud rate of 115200 is typical. The minicom command can be used to set the baud rate to the appropriate value:
          minicom /dev/ttyS0

          To configure the serial port, press Ctrl+A and then the keys Z and O.

      • Using the Ethernet support
        • Verify the kgdboe kernel command line option.
        • Check that you are using the correct Ethernet cable. A UTP network cable should be used if your target is connected to the network through a hub or switch; the host and the target can be connected directly using a crossover cable.

        Note 1: The Ethernet support has only been tested with the development machine and the test machine on the same subnet.

        Note 2: If you still have problems and your network setup is correct, you can use, for example, the ethereal command (run as root) to inspect the network traffic.

    • A breakpoint does not get hit as expected: check whether you are using the appropriate vmlinux file.
    • GDB is not able to get the symbols: check whether your kernel and modules are built with the debug option enabled.
    • GDB prints some invalid frames during the backtrace. This is because GDB does not know where to stop a backtrace, and it cannot determine the correct code line if it is in an assembly language file. However, GDB is able to handle inline assembly code included in C files.

    Known limitations of KGDB

    • Hardware watchpoints/breakpoints are not yet supported.

    Building SRPMs



    Occasionally, it may be necessary to rebuild one of the supplied SRPMs. This is not a difficult operation, but it does rely on some features of rpm which may not be familiar to users who have only used this tool for installing binary RPMs.

    Note: The procedures specified on this page are applicable to any STLinux distribution, which means that the rpm filename and path are generic. The instructions below use stlinux2x as a generic distribution name and STLinux-X.X as a generic installation root directory. These should be substituted, as applicable, with the correct names of the 2.3 or 2.4 distributions. Similarly, the version numbers of tools are specified in the generic form x.y-z. The following example is intended for glibc related packages; the same can be done for uclibc by simply changing the relevant paths.

    All the macros needed for the build are specified in the stm-host-rpmconfig RPM. This is supplied with the distribution, but it is not installed by default so must be installed (as root) by:

    host# rpm -ihv stlinux2x-host-rpmconfig-x.y-z.noarch.rpm

    The rest of the process does not require root privileges and does not modify the RPM database. It should be possible to work in the normal RPM build area by removing the _topdir macro from localmacros. If, for any reason, you do not want to do this, cd to your working directory and create the build hierarchy:

    host% mkdir -p SOURCES SPECS BUILD SRPMS RPMS/{noarch,i386,sh4}

    There are options that must be given to rpm which can only be specified in configuration files. These control the root of the RPM build tree, and the location to search for macro definitions. Note that the localrc file contains just a single long line; the second command is shown here broken into a number of lines for clarity:

    host% echo "%_topdir	$(pwd)" > localmacroshost% echo "macrofiles: /usr/lib/rpm/macros:
                /opt/STM/STLinux-X.X/config/rpm/hosts/i686-pc-linux-gnu:
                /opt/STM/STLinux-X.X/config/rpm/targets/sh4-linux:
                /opt/STM/STLinux-X.X/config/rpm/common:
                `pwd`/localmacros" > localrc

    Next install the SRPM into this structure, for example:

    host% rpm --rcfile <path_to_localrc>/localrc --macros
              /usr/lib/rpm/macros:/opt/STM/STLinux-X.X/config/rpm/hosts/i686-pc-linux-gnu:
              /opt/STM/STLinux-X.X/config/rpm/targets/sh4-linux:
              /opt/STM/STLinux-X.X/config/rpm/common:<path_to_localmacros>/localmacros 
              -Uhv stlinux2x-target-util-linux-x.y-z.src.rpm

    substituting the name of the required SRPM.

    Note: Make sure that the PATH is correctly set. The directories /opt/STM/STLinux-X.X/host/bin and /opt/STM/STLinux-X.X/devkit/sh4/bin must be listed before all other directories.

    You can set that with the following command:

    host% export PATH=/opt/STM/STLinux-X.X/host/bin:/opt/STM/STLinux-X.X/devkit/sh4/bin:$PATH 

    You need to set TARGET_RPMARCH to sh4-23- (or sh4-24-) to generate packages with the correct requires and provides prefix. RPMPREFIX=/opt/STM/STLinux-2.X/devkit/sh4/target must also be set to allow the dependency checker to work correctly for target rpms. Leave it blank when rebuilding a host package. Note: 2.X stands for either 2.3 or 2.4.

    host% export TARGET_RPMARCH=sh4-2X-
    host% export RPMPREFIX=/opt/STM/STLinux-2.X/devkit/sh4/target

    Finally, build the package:

    host% rpmbuild --rcfile <path_to_localrc>/localrc --macros /usr/lib/rpm/macros:
               /opt/STM/STLinux-X.X/config/rpm/hosts/i686-pc-linux-gnu:
               /opt/STM/STLinux-X.X/config/rpm/targets/sh4-linux:
               /opt/STM/STLinux-X.X/config/rpm/common:<path_to_localmacros>/localmacros 
               -ba -v --target=sh4-linux <path_to_spec>/stm-target-util-linux.spec

    Please add real paths where required in the above examples.

    If the package is a host or cross utility, rather than a target package, remove the --target option.

    Note: If, for any reason, you intend to modify the spec file, be aware that commenting out a line that includes a macro invocation (starting with a % character) may not have the intended outcome. rpm may still interpret such lines as macros despite the fact that they are commented out, causing problems that can be hard to debug. STMicroelectronics strongly suggests making a backup copy of the original file and then deleting any lines that are no longer needed. The lines can be restored from the backup if they need to be replaced.


    Debugging



    This section covers a range of topics relating to debugging STLinux software. It includes information on debugging applications running in user space as well as debugging the kernel.


    Tracing and profiling


    Introduction to Tracing and Profiling

    Tracing and Profiling are important aspects of software analysis. A software developer uses these techniques to improve the quality and efficiency of an application. STLinux provides a range of tracing and profiling tools; these are described and discussed on the following pages.

    Tracing is a technique where the activity of a program is logged so that it can be analyzed and any aberrant behavior detected and corrected. The level of detail that is logged depends upon the circumstances and can be controlled by the developer.

    STLinux provides two tracing tools: KPTrace and LTTng.

    There are also standard tools provided and supported by the community, ftrace and perf, whose usage is described in a separate community wiki.

    Profiling is a technique of measuring the activity of a program in respect of the amount of resources that the program is consuming or the amount of time it is taking to execute. The most common use of profiling is for "performance tuning": to gather information to aid in program optimization.

    STLinux provides two profiling tools: gprof and OProfile.


    Building


    Building

    STLinux is designed for embedded systems, which means that the majority of software development takes place on a host computer and the software is only transferred to the target device for final testing. This section gives information on using the GNU toolchain for cross-compilation of software before transferring it to the target.


    How to profile an STLinux system


    Introduction to Profiling

    For users interested in tuning the performance of their applications, the STLinux distribution includes two profilers - gprof and OProfile.

    The standard GNU profiler, gprof, has two key limitations for embedded Linux development:

    • It will only profile a single user mode application.
    • It will only profile an entire run of that application, and requires it to exit.

    Many embedded applications are never intended to exit!

    OProfile addresses both of these issues. It is built into the kernel, and uses timer interrupts to profile the kernel and all user mode processes. The resulting profile can show the time spent in each function, process, thread, or binary. It can be started and stopped at any time and so does not require applications to complete.

    Using OProfile

    Before OProfile can be used, it must be configured into the kernel:

    Profiling support  --->
      Profiling support (EXPERIMENTAL) --->
        OProfile system profiling (EXPERIMENTAL)

    Of course, to get a symbol level profile of the kernel, it must be built with debug info:

    Kernel hacking  --->
      Kernel debugging --->
        Compile the kernel with debug info

    The OProfile support can either be built as a kernel module, or statically linked into the kernel. The rest of this document assumes that it has been statically linked.

    To profile the time spent in individual kernel functions, a copy of the kernel must be placed in the target filesystem. This can be done with scp.

    host% scp vmlinux root@targetboard:/root

    OProfile is configured and driven using command-line utilities on the target system. The two most important of these are:

    • opcontrol - to configure, start and stop profiling.
    • opreport - to output the current profile in a variety of forms.

    To start profiling the system, use the following commands. They could be put in a script for convenience.

    target% rm -rf /var/lib/oprofile/
    target% opcontrol --init
    target% opcontrol --setup --separate=all --vmlinux=/root/vmlinux
    target% opcontrol --start

    The first command deletes any samples from previous runs. If the run was stopped unexpectedly (for example by resetting the board), then these samples will prevent any new ones being collected.

    opcontrol --init creates the directory structure under /var and loads the OProfile kernel module where appropriate.

    In the opcontrol --setup line, the kernel image is specified (this can be omitted if you do not want to profile the kernel at symbol level), and we specify that samples should be separated out by process, thread, binary image and cpu. By default samples are only separated by binary image.

    opcontrol --start begins collecting samples.
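
    Put together, a minimal sketch of such a script, using the same commands as above:

    #!/bin/sh
    # Discard samples left over from a previous run.
    rm -rf /var/lib/oprofile/
    # Create the directory structure under /var and load the module if needed.
    opcontrol --init
    # Profile the kernel image and separate samples by process, thread, image and cpu.
    opcontrol --setup --separate=all --vmlinux=/root/vmlinux
    # Begin collecting samples.
    opcontrol --start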

    When you have run the application(s) to be profiled, two commands are available to write out the samples to /var:

    opcontrol --dump

    opcontrol --stop

    The only difference is that after a dump, OProfile will continue to collect samples. opcontrol --stop stops profiling altogether.

    Displaying the results

    Once the samples have been collected, they can be viewed with opreport:

    target% opreport --merge=all
     CPU: CPU with timer interrupt, speed 0 MHz (estimated)
     Profiling through timer interrupt
               TIMER:0|
       samples|      %|
     ------------------
           966 94.7988 vmlinux
            23  2.2571 bash
            18  1.7664 libc-2.3.3.so
             6  0.5888 ld-2.3.3.so
             2  0.1963 ls
             2  0.1963 sshd
             1  0.0981 gawk
             1  0.0981 grep

    This shows that over the profiled period, 94.8% of the time was spent in the kernel, 2.3% of the time in bash, 1.8% of the time in glibc...

    The option --merge is used to undo the separation of samples by process and thread. To break the profile down by symbol, use the option -l:

    target% opreport --merge=all -l
     CPU: CPU with timer interrupt, speed 0 MHz (estimated)
     Profiling through timer interrupt
     samples  %        app name                 symbol name
     15660    50.3764  vmlinux                  cpu_idle
     14954    48.1053  vmlinux                  default_idle
        21     0.0676  vmlinux                  __raw_readsl
        11     0.0354  libstdc++.so.6.0.3       anonymous symbol from section .plt
         9     0.0290  ld-2.3.3.so              do_lookup_x
         8     0.0257  sshd                     rijndael_encrypt
    <snip>

    This shows that just over 50% of all samples taken were in the kernel function cpu_idle.

    The output can be restricted to a single binary image like this:

    target% opreport --merge=all -l image:`which sshd`
     CPU: CPU with timer interrupt, speed 0 MHz (estimated)
     Profiling through timer interrupt
     samples  %        app name                 symbol name
     13        7.2626  sshd                     rijndael_encrypt
      7        3.9106  sshd                     anonymous symbol from section .plt
      5        2.7933  libcrypto.so.0.9.6       md5_block_host_order
      5        2.7933  sshd                     rijndael_decrypt
      4        2.2346  libc-2.3.3.so            memcpy
    <snip>

    This shows how long sshd spent in each function it executed, including shared library and kernel functions.

    If the --merge option is removed, then the output is broken down by process and thread:

    target% opreport -l image:/root/thread_test
     CPU: CPU with timer interrupt, speed 0 MHz (estimated)
     Profiling through timer interrupt
     Processes with a thread ID of 1112
     Processes with a thread ID of 1139
     Processes with a thread ID of 1154
     samples % samples % samples % symbol name
     1 100.000  0  0        0  0        _IO_new_file_overflow
     0  0       1  100.000  0  0        __udivsi3_i4
     0  0       0  0        1  100.000  clone
     <snip>

    In this display, each thread has two columns: the number of samples, and the percentage this represents of total samples. In the example above, only one sample was taken in each thread: thread 1112 was sampled once in _IO_new_file_overflow, thread 1139 was sampled once in __udivsi3_i4, and so on.

    Many more permutations of options are possible. As the last example shows, once the profile is broken down by thread the output can quickly become very complex and difficult to read! Care must be taken to specify the most appropriate report format for the information you are trying to extract.

    One final option that may be of interest:

    target% opgprof `which sshd`

    This writes out the profile of a single application in gprof format. It can then be viewed with gprof or, more conveniently, with a GUI that recognises that format, such as kprof.

    Much more information on OProfile can be found on its project homepage, and in the manpages for the various command-line tools.

    Other useful tools

    OProfile is sample-based. To get a totally accurate breakdown of the time spent in each process, thread, or interrupt handler, consider using KPTrace. KPTrace also provides detailed figures on the number of times interrupts fired, the number of context switches, and so on.

    Both gprof and OProfile can be used from within STWorkbench, which provides a sophisticated graphical interface on the profile results. KPTrace also provides an excellent graphical trace viewer inside STWorkbench.



    Measuring CPU load with cyclesoak



    Although information on system idle time is available from OProfile and KPTrace, the simplest and most accurate way to measure the CPU load of an STLinux ST40 or ARM system is to use cyclesoak.

    cyclesoak is a tool for measuring system resource utilisation. It uses a "subtractive" algorithm: it measures how much system capacity is still available, rather than how much is consumed. This makes it much more accurate than tools such as top, which simply attempt to add up the CPU usage of separate user processes.

    To use cyclesoak, it must first be calibrated. To do this, run it with the -C option on an unloaded system:

    root@typhoo:~# cyclesoak -C
    using 1 CPUs
    calibrating: 10561832 loops/sec
    calibrating: 10671135 loops/sec
    calibrating: 10668440 loops/sec
    calibrating: 10668936 loops/sec
    calibrated OK.  10668936 loops/sec
     
    root@typhoo:~#

    This works out how many spare cycles are present on a totally unloaded system. This should be done immediately after boot, before any additional modules are loaded, with no other applications running. The system load figures produced will be relative to the system load at this time. From then on, running cyclesoak with no arguments gives the system load:

    root@typhoo:~# cyclesoak
    using 1 CPUs
    System load:  0.9%
    System load: -0.0%
    System load: -0.0%
    System load: -0.1%
    System load: -0.0%
    System load: -0.1%
    System load: -0.1%
    System load:  0.1%
    System load: 75.7%
    System load: 21.2%
    System load: 11.8%
    System load: 36.1%
    System load: 62.0%
    System load: -0.0%
    System load:  0.0%
     
    root@typhoo:~#

    The tool continues to run until it is stopped with Ctrl+C. A value is output every second; this period is configurable with the -p (set period in seconds) and -m (set period in milliseconds) options.
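
    For example, assuming -p takes the period as its argument, to output a load figure every five seconds:

    root@typhoo:~# cyclesoak -p 5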

    To measure the load of one particular application or subsystem, recalibrate cyclesoak with everything else running, then run that application or enable that subsystem. The load figures, being relative, will then indicate the load that has been added.

    The load figures produced by cyclesoak can also be written into the KPTrace trace buffer by running with the -k option. Because the output is then interleaved with the trace, this allows the load to be seen in the context of what was happening on the system at that time.


    Kernel memory leak checking


    Usage

    From STLinux 2.3 kernel #122 onwards, the kernel contains a memory leak detection tool called kmemleak.

    To enable kernel memory leak checking turn on this kernel configuration option:

    Kernel Hacking -> Kernel memory leak detector

    and rebuild the kernel and all kernel modules. Boot the system.

    Detected memory leaks are reported through debugfs. To see the result, it is necessary to mount a debugfs filesystem on the target, e.g.

    target# mount -t debugfs nodev /sys/kernel/debug

    You may find it more convenient to mount it by default from /etc/fstab, by adding this line:

       debug           /sys/kernel/debug          debugfs  defaults  0 0

    A report of all potential memory leaks (defined as memory that has been dynamically allocated but to which no pointer remains in memory) can then be viewed in the file /sys/kernel/debug/memleak:

    target# cat /sys/kernel/debug/memleak
    target# cat /sys/kernel/debug/memleak
    unreferenced object 0x86c3c9c0 (size 32):
      [<8400473e>] show_cpuinfo
      [<8400473e>] show_cpuinfo
      [<84076f5a>] seq_read
      [<8408d176>] proc_reg_read
      [<84076e6a>] seq_read
      [<8405d6f6>] vfs_read
      [<8405db62>] sys_read
      [<840078f8>] syscall_call
    target#

    Note that it is necessary to cat the file twice before leaks appear; this allows the system to avoid unwanted "false positive" reports. This report shows that 32 bytes were allocated at the address 0x8400473e, in the function show_cpuinfo(), but there is no pointer left in memory that can be used to free them.

    Testing

    To test the tool, try adding this line:

        kmalloc(32, GFP_KERNEL);

    to the function show_cpuinfo() in the file arch/sh/kernel/setup.c. That function is executed when you cat the file /proc/cpuinfo. Because the pointer returned by kmalloc is not stored anywhere, those 32 bytes are leaked and will be reported as shown earlier.

    Post-processing the results

    The address at which the memory was allocated can be resolved to a source line using the tool sh4-linux-addr2line:

    % sh4-linux-addr2line -e vmlinux 8400473e
    /scratch/smithc/git/linux-sh4-2.6.23.y/arch/sh/kernel/setup.c:396

    If the symbol names in the output are C++ mangled names, they can be decoded with the tool c++filt. The entire kmemleak output file can be post-processed with a simple script, such as this:

    #!/usr/bin/perl

    # Read the kmemleak report given as the first argument.
    open(FILE, $ARGV[0]);
    @lines = <FILE>;
    close(FILE);

    # For each stack-frame line, print the address field and pass
    # the symbol name through c++filt to demangle it.
    foreach $line (@lines) {
            if ($line =~ /\[/) {
                    @bits = split(" ", $line);
                    print "  $bits[0]";
                    system("/usr/bin/c++filt $bits[1]");
            } else {
                    print $line;
            }
    }
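
    Assuming the script above is saved as kmemleak-filt.pl (a file name chosen purely for illustration) and the report has been copied from the target into memleak.txt, it can be run like this:

    host% perl kmemleak-filt.pl memleak.txt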

    Extra configuration options

    The kernel configuration allows various kmemleak settings to be adjusted, including the callstack depth (8 is the default) and the maximum number of leaks (100 by default).


    Uninstalling STLinux



    The STLinux distribution is based on rpm packages, so any STLinux package can be managed with the standard RPM commands. For more details, please refer to the official RPM site: rpm.org.

    To simplify uninstall operations, STM provides a simple uninstall script, which is available for download.

    It is mandatory to run this script with administrator privileges. It uninstalls only the specified STLinux distribution. The uninstall script has the following options:

    --arch: specify the architecture to remove (sh4, sh4_uclibc, arm)
    --distro: specify the revision of the distribution (2.2, 2.3, 2.4)
    --force: force removal of all the installation directories
    --dry-run: simulate the uninstall process but do not remove any package
    --silent: do not display what is going on
    --debian: uninstall on Debian-like systems
    --help: display the help messages

    For example, to uninstall the STLinux 2.4 sh4 distribution:

    host# ./uninstall.sh --arch sh4 --distro 2.4

    This removes all the STLinux sh4 rpm packages. If more than one STLinux distribution is installed (for example, sh4 and sh4_uclibc), the uninstaller removes only the specified distribution; to remove several, invoke it once for each distribution, as shown below.
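
    For example, to remove both the sh4 and sh4_uclibc 2.4 distributions, run the uninstaller twice:

    host# ./uninstall.sh --arch sh4 --distro 2.4
    host# ./uninstall.sh --arch sh4_uclibc --distro 2.4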