Tuesday, October 8, 2024

The Shebang Challenge: Finding the Relative Path of Python and Running Your Script

Introduction

As a developer, you've likely encountered the Shebang line at the top of your Python scripts. But have you ever wondered what it does or how to use it effectively? In this blog post, we'll delve into the world of Shebang, explore its challenges, and provide solutions to find the relative path of Python and run your script seamlessly.

What is Shebang?

A shebang (also known as a hashbang) is the first line of a script: it starts with #! followed by the path of the interpreter that should run the script. When you execute the file directly, the kernel reads this line and launches the specified interpreter with the script as its argument.

Challenges with Shebang

While Shebang seems straightforward, there are some challenges associated with it:

1. Finding the Relative Path of Python

The Shebang line requires an absolute path to the Python interpreter. However, this path can vary depending on the system configuration and environment.

2. Cross-Platform Compatibility

Different operating systems have different conventions for specifying the interpreter path.

3. Virtual Environments

When using virtual environments, the Python interpreter path can change, making it difficult to maintain a consistent Shebang line.

Solutions

1. Using /usr/bin/env

One solution is to use /usr/bin/env instead of hardcoding the Python path:

```Python
#!/usr/bin/env python
```

Here, env searches the directories listed in the PATH environment variable and runs the first python it finds, so the script works regardless of where the interpreter is installed.
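To see this in action, here is a minimal sketch; the file name hello.py is arbitrary, and python3 is used because many modern distributions no longer ship a bare python command:

```Bash
cat > hello.py <<'EOF'
#!/usr/bin/env python3
import sys
print("running under", sys.executable)
EOF
chmod +x hello.py   # mark the script as executable
./hello.py          # the kernel reads the shebang; env finds python3 on PATH and runs the script
```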

2. Virtual Environment Solution

When using virtual environments, create a wrapper script that activates the environment and runs the Python script:

```Bash
#!/bin/bash
source /path/to/venv/bin/activate
python /path/to/script.py
```
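If the virtual environment lives at a fixed, known location, a simpler alternative is to skip the wrapper and point the script's shebang directly at the environment's interpreter, for example #!/path/to/venv/bin/python (with /path/to/venv as a placeholder). The venv's interpreter picks up that environment's packages without an explicit activation step.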

3. Relative-Path Shebang for Bundled Interpreters

If your script ships alongside its own interpreter (for example, in the bin directory of a relocatable installation), you can make the shebang resolve the interpreter relative to the script itself. If your GNU coreutils version is 8.30 or above, env supports the -S option, which allows a full command line in the shebang:

```sh
#!/usr/bin/env -S /bin/sh -c 'export LD_LIBRARY_PATH=`dirname $0`/../lib;"`dirname $0`/python3.11" "$@"'
```

For older versions of GNU coreutils, you can use this sh trick instead:

```sh
#!/bin/sh
"exec" "`dirname $0`/python" "$0" "$@"
```

See the resource linked at the end of this post for more background on this technique.

Best Practices

  • Use /usr/bin/env: This approach provides flexibility and cross-platform compatibility.
  • Avoid hardcoding paths: Instead, use relative paths or environment variables.
  • Test your script: Verify that your script runs correctly on different systems and environments.


Additional resource:

https://stackoverflow.com/questions/20095351/shebang-use-interpreter-relative-to-the-script-path 

Sunday, September 29, 2024

Enhancing LLDB with "search history" and "completion" Support: A Guide to Better Debugging

As developers, we often find ourselves deep in debugging sessions, relying heavily on tools like LLDB, the debugger from the LLVM project. However, one common frustration is that many LLDB builds lack command history and advanced line-editing features by default. In this post, we'll explore two methods to enhance LLDB's functionality: using rlwrap and compiling LLDB with native readline support.

Method 1: Using rlwrap

What is rlwrap?

rlwrap (readline wrapper) is a utility that adds readline-style editing and history capabilities to command line programs that don't have these features built-in. It's a quick and easy way to improve LLDB's usability without modifying the debugger itself.

Features of rlwrap:

  • Maintains a history of commands
  • Provides searchable history (use Ctrl-R to search backwards)
  • Allows line editing with arrow keys, home/end, etc.
  • Can save history between sessions
  • Offers tab completion (if supported by the wrapped program)

Setting up rlwrap with LLDB:

  1. Install rlwrap:
    • On macOS with Homebrew: brew install rlwrap
    • On Ubuntu/Debian: sudo apt-get install rlwrap
  2. Create an alias in your shell configuration file (e.g., ~/.bashrc, ~/.zshrc):
    alias lldb='rlwrap lldb'
  3. Reload your shell configuration or start a new terminal session.

Now, when you run lldb, you'll have access to command history, line editing, and other readline features.
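If you also want the wrapped history to persist across sessions, a sketch like the following can go in your shell configuration instead (the flags are documented in rlwrap's man page; the history file path and size are just examples):

```bash
# Keep a persistent 5000-entry history and enable filename completion for the wrapped lldb
alias lldb='rlwrap -H ~/.lldb_history -s 5000 -c lldb'
```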

Method 2: Compiling LLDB with Native Readline Support

For a more integrated solution, we can compile LLDB from source with readline support enabled. This method provides native readline functionality without relying on external wrappers.

Steps to Compile LLDB with Readline:

  1. Install prerequisites:
    sudo apt-get update
    sudo apt-get install -y cmake ninja-build libreadline-dev
  2. Clone the LLVM project (which includes LLDB):
    git clone https://github.com/llvm/llvm-project.git
    cd llvm-project
  3. Create a build directory and configure CMake:
    mkdir build && cd build
    cmake -G Ninja ../llvm \
      -DLLVM_ENABLE_PROJECTS="clang;lldb" \
      -DCMAKE_BUILD_TYPE=Release \
      -DLLDB_ENABLE_LIBEDIT=OFF \
      -DLLDB_ENABLE_CURSES=OFF \
      -DLLDB_ENABLE_READLINE=ON \
      -DCMAKE_INSTALL_PREFIX=/usr/local
  4. Build and install LLDB:
    ninja lldb
    sudo ninja install-lldb

This process will give you an LLDB binary with native readline support, providing features like searchable history and improved line editing.

Considerations

  • The rlwrap method is quick and doesn't require recompiling LLDB, but it's an external wrapper and may not integrate as seamlessly.
  • Compiling LLDB from source provides a more integrated solution but requires more time, disk space, and technical knowledge.
  • Building LLDB replaces your system's LLDB with a custom version. Make sure to back up your current installation first.

Final words

Both methods significantly improve LLDB's usability by adding crucial features like command history and advanced line editing. The choice between using rlwrap or compiling with readline support depends on your specific needs and comfort level with building software from source.

By enhancing LLDB with these readline capabilities, you can make your debugging sessions more efficient and enjoyable. Happy debugging!

Updating a linked library name in an ELF binary (without recompiling)

To update the linked library in your ELF executable without recompiling, you can use the patchelf tool. It can modify the dynamic linker, RPATH, and needed-library (DT_NEEDED) entries of ELF executables. Here's how you can do it:

  1. First, make sure you have patchelf installed. On most Linux distributions, you can install it using your package manager. For example, on Ubuntu or Debian:

    sudo apt-get install patchelf
  2. Once installed, you can use patchelf to change the linked library. The command will look like this:

    patchelf --replace-needed libffi.so.6 libffi-mine.so.6 your_executable
    Replace your_executable with the actual name of your ELF binary.
  3. After running this command, you can verify the change using the ldd command:

    ldd your_executable
    This should show that your executable is now linked against libffi-mine.so.6 instead of libffi.so.6.
  4. If libffi-mine.so.6 is not in the standard library search path, you may need to add its location to the RPATH of your executable:

    patchelf --set-rpath /path/to/library/directory your_executable
    Replace /path/to/library/directory with the actual path where libffi-mine.so.6 is located.

Remember to make a backup of your original executable before making these changes. Also, ensure that libffi-mine.so.6 is compatible with your executable, as changing the linked library can lead to runtime issues if the new library is not compatible.
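Putting the steps together, a minimal sketch of the whole workflow might look like this (the library names and paths are the placeholders used above):

```bash
cp your_executable your_executable.bak                      # keep a backup of the original
patchelf --replace-needed libffi.so.6 libffi-mine.so.6 your_executable
patchelf --set-rpath /path/to/library/directory your_executable
patchelf --print-needed your_executable                     # confirm the DT_NEEDED entry changed
ldd your_executable                                         # confirm the new library resolves
```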

This method allows you to change the linked library without recompiling, but it's generally safer to recompile with the correct library if possible, as it ensures complete compatibility.

Tuesday, September 17, 2024

A Python script to print ARM cpuinfo



Here's a Python program that reads the ARM CPU feature flags from /proc/cpuinfo and prints the corresponding Armv8/Armv9 extensions:

https://github.com/rednaveen/aarch64_cpu_info
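If you just want a quick look at the raw flags that such a script parses, you can pull them straight out of /proc/cpuinfo on any AArch64 Linux machine; each flag (for example sve or atomics) corresponds to an Armv8.x/Armv9.x extension:

```bash
# List the CPU feature flags reported for the first core, one per line
grep -m1 '^Features' /proc/cpuinfo | tr ' ' '\n' | tail -n +2 | sort
```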

The link below is a comprehensive table of Arm CPUs showing which Armv8 and Armv9 extensions each supports, organized according to the feature flags reported in /proc/cpuinfo on Linux:

https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html 


The following link does not list the /proc/cpuinfo feature codes, but it documents the Armv8.x extensions straight from Arm's site:

https://developer.arm.com/documentation/102378/0201/Armv8-x-A-and-the-SBSA

Monday, September 16, 2024

Deploying a prebuilt Clang/LLVM compiler toolchain


The page below lists the apt repository lines that need to be added for your distribution:

https://apt.llvm.org/

For example, for Ubuntu 22.04 (jammy) and LLVM 18:

deb http://apt.llvm.org/jammy/ llvm-toolchain-jammy-18 main
deb-src http://apt.llvm.org/jammy/ llvm-toolchain-jammy-18 main
These lines need to be added to /etc/apt/sources.list (or a file under /etc/apt/sources.list.d/).
After adding them, run apt-get update. Then install the packages you need:
# LLVM
apt-get install libllvm-18-ocaml-dev libllvm18 llvm-18 llvm-18-dev llvm-18-doc llvm-18-examples llvm-18-runtime
# Clang and co
apt-get install clang-18 clang-tools-18 clang-18-doc libclang-common-18-dev libclang-18-dev libclang1-18 clang-format-18 python3-clang-18 clangd-18 clang-tidy-18
# compiler-rt
apt-get install libclang-rt-18-dev
# polly
apt-get install libpolly-18-dev
# lldb
apt-get install lldb-18
# lld (linker)
apt-get install lld-18
# libc++
apt-get install libc++-18-dev libc++abi-18-dev
# OpenMP
apt-get install libomp-18-dev
# libclc
apt-get install libclc-18-dev
# libunwind
apt-get install libunwind-18-dev
# mlir
apt-get install libmlir-18-dev mlir-18-tools
# bolt
apt-get install libbolt-18-dev bolt-18
# flang
apt-get install flang-18
# wasm support
apt-get install libclang-rt-18-dev-wasm32 libclang-rt-18-dev-wasm64 libc++-18-dev-wasm32 libc++abi-18-dev-wasm32 libclang-rt-18-dev-wasm32 libclang-rt-18-dev-wasm64
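If you'd rather not maintain the sources list and package selection by hand, apt.llvm.org also publishes an automatic installation script that adds the repository, imports the signing key, and installs a given version in one step; a rough sketch (18 matches the version used above):

```bash
wget https://apt.llvm.org/llvm.sh    # convenience script published on apt.llvm.org
chmod +x llvm.sh
sudo ./llvm.sh 18                    # set up the repository and install the LLVM/Clang 18 packages
```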

References: blog

Saturday, August 3, 2024

 

Understanding GCC and Clang Compiler Drivers

Compiler Driver Overview

A compiler driver is a critical component in the compilation process. It manages the sequence of steps required to transform source code into executable binaries. This involves invoking different tools for preprocessing, compiling, assembling, and linking. Two prominent compiler drivers are GCC (GNU Compiler Collection) and Clang, which are widely used in the software development industry.

GCC Compiler Driver

The GCC (GNU Compiler Collection) is a comprehensive compiler system that supports various programming languages, including C, C++, and Fortran. The gcc program serves as a compiler driver in the GCC system, orchestrating the different stages of compilation.

Key Components of GCC

  1. Preprocessor: The preprocessor (e.g., cpp) handles macro substitution, file inclusion, and conditional compilation.
  2. Compiler: The actual compilation is performed by cc1 for C code and cc1plus for C++ code.
  3. Assembler: The assembler (e.g., as) translates the assembly code generated by the compiler into machine code.
  4. Linker: The linker (e.g., collect2) combines object files and libraries into a single executable.

GCC Spec Strings

The behavior of the gcc compiler driver is controlled by spec strings, defined in a plain-text spec file. These spec strings specify how to construct the command lines for the various stages of the compilation process.

You can examine the built-in spec file using the command:

gcc -dumpspecs

This command outputs the default spec strings used by gcc, providing insight into how the compiler driver orchestrates the different tools.

Using GCC

Here's an example of using gcc to compile a simple C program:



gcc -o myprogram myprogram.c

This command invokes the preprocessor, compiler, assembler, and linker in sequence to produce an executable named myprogram.

For more advanced usage, you can specify different options to control each stage of the compilation process. For instance, to produce an assembly file instead of an executable, you can use:



gcc -S myprogram.c
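To see exactly which tools the gcc driver invokes, and with what command lines, you can ask it to print or keep its intermediate steps; for example:

```bash
gcc -### -o myprogram myprogram.c          # print the cc1/as/collect2 command lines without running them
gcc -v -o myprogram myprogram.c            # run the full pipeline, echoing each invocation
gcc -save-temps -o myprogram myprogram.c   # keep the .i, .s, and .o intermediate files
```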

Clang Compiler Driver

Clang is another widely used compiler driver, part of the LLVM project. It aims to provide fast and user-friendly compilation while maintaining compatibility with GCC.

Key Components of Clang

  1. Preprocessor: Similar to GCC, Clang uses a preprocessor to handle macros, file inclusion, and conditional compilation.
  2. Compiler: The core compiler transforms source code into intermediate representation (IR).
  3. Assembler: Clang uses LLVM's assembler to convert IR into machine code.
  4. Linker: The linker combines object files and libraries into a final executable.

Using Clang

Clang provides a clang program as its compiler driver, which mimics the behavior of gcc while offering additional features and improved diagnostics.

Here's an example of using clang to compile a simple C program:



clang -o myprogram myprogram.c

This command follows a similar process as gcc, invoking the necessary tools to produce an executable.

For advanced usage, Clang offers a variety of options to control each stage of the compilation. For example, to generate an intermediate representation (IR) file, you can use:



clang -S -emit-llvm -o myprogram.ll myprogram.c
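Clang's driver offers similar introspection options, which are handy when comparing its behavior with gcc's:

```bash
clang -ccc-print-phases myprogram.c   # show the planned phases: preprocess, compile, assemble, link
clang -### -o myprogram myprogram.c   # print the jobs the driver would run, without executing them
```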

Comparing GCC and Clang

While both GCC and Clang serve as powerful compiler drivers, there are some differences worth noting:

  1. Performance: Clang is often praised for its faster compilation times and more informative error messages.
  2. Compatibility: GCC has been around longer and may have broader support for various architectures and platforms.
  3. Licensing: GCC is released under the GPL, while Clang is distributed under the permissive Apache License 2.0 with LLVM Exceptions (older releases used the University of Illinois/NCSA license).

Conclusion

Understanding the intricacies of compiler drivers like GCC and Clang is essential for effective software development. Both tools provide robust features and options to control the compilation process, catering to a wide range of programming needs. Whether you choose GCC or Clang, having a solid grasp of how these compiler drivers work will enhance your ability to optimize and troubleshoot your code.

Notable blog references:

https://maskray.me/blog/2021-03-28-compiler-driver-and-cross-compilation

Using musl for Better Linux Compatibility

If you're a developer aiming to create portable Linux applications, you've likely encountered compatibility issues across different distributions. One solution to this problem is using musl, a lightweight alternative to the GNU C Library (glibc). In this post, we'll explore how musl can help achieve better compatibility and discuss its integration with popular libraries and linkers.

What is musl?

Musl is an implementation of the C standard library intended for use on Linux-based operating systems. It's designed to be lightweight, fast, and simple, while still being compatible with a wide range of existing software.

Benefits of using musl

  1. Smaller binary sizes: Musl-linked executables are often significantly smaller than their glibc counterparts.
  2. Static linking: Musl makes it easier to create statically linked executables, which can run on any Linux system without dependency issues.
  3. Consistency: Musl's behavior is more consistent across different architectures and Linux distributions.

Using musl in your projects

To use musl, you'll need to compile your code with a musl-based toolchain. Many Linux distributions offer musl-based versions of their package sets, or you can use tools like musl-gcc to compile your code against musl.

Here's a basic example of compiling a simple C program with musl:

musl-gcc -static example.c -o example

This command will produce a statically linked executable that should run on any Linux system, regardless of the installed libc version.
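To confirm the result really is a self-contained static binary, two quick checks (the exact wording of the output varies by distribution):

```bash
file example   # should report a statically linked executable
ldd example    # should report that it is not a dynamic executable
```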

Compatibility with larger libraries

One common concern when considering musl is its compatibility with larger, more complex libraries. The good news is that many popular libraries can indeed be used with musl, although some may require additional configuration or patches.

Boost

Boost, a collection of C++ libraries, can be compiled and used with musl. However, you may need to make some adjustments to your build process. Some Boost libraries, particularly those that interact closely with the system (like Boost.System or Boost.Filesystem), may require small patches or configuration changes.

Qt

Qt, the popular C++ framework for developing graphical user interfaces, can also be used with musl. However, building Qt with musl support requires some extra steps:

  1. Configure Qt with the -static flag to create a statically linked build.
  2. Use a musl-based toolchain for compilation.
  3. Disable certain features that may not be fully compatible with musl (e.g., some parts of QtNetwork).

While it's possible to use Qt with musl, be prepared for a more complex build process and potential compatibility issues with some Qt modules.

Linkers and musl

Musl is compatible with various linkers beyond the default GNU ld, including LLD and Gold:

LLD (LLVM Linker)

LLD, the linker from the LLVM project, can be used with musl without any significant issues. In fact, using LLD can lead to faster link times compared to the default GNU linker (ld).

To use LLD with musl, you can pass the -fuse-ld=lld flag to your compiler:

musl-gcc -fuse-ld=lld -static example.c -o example

Gold

Gold, an alternative GNU linker originally developed at Google and shipped as part of GNU binutils, is also compatible with musl. Like LLD, it can offer faster linking times than the traditional GNU linker.

To use Gold with musl, you can pass the -fuse-ld=gold flag to your compiler:

musl-gcc -fuse-ld=gold -static example.c -o example

Installing musl on Red Hat Enterprise Linux 8.x

Good news for Red Hat Enterprise Linux (RHEL) users: musl is indeed available as part of the EPEL (Extra Packages for Enterprise Linux) repository. Here's how you can install it on RHEL 8.x:

  1. First, ensure that you have the EPEL repository enabled on your system. If you haven't already done so, you can enable it by running:
    sudo dnf install epel-release
  2. Once EPEL is enabled, you can install musl using the following command:
    sudo dnf install musl-devel musl-gcc
    This will install both the musl development files and the musl-gcc wrapper, which allows you to easily compile programs against musl.
  3. After installation, you can verify that musl is installed correctly by checking its version:
    musl-gcc --version
    This should display version information for both musl-gcc and the underlying GCC compiler.

With musl installed from EPEL, you can now use it to compile your programs on RHEL 8.x. For example:

musl-gcc -static your_program.c -o your_program

This will compile your program using musl instead of the system's default glibc.

Remember that while musl is available in EPEL, some other tools or libraries you might need for your development process may not be. Always check the availability of all required packages in EPEL or other repositories when planning your development environment on RHEL.

Conclusion

Using musl can significantly improve the portability and compatibility of your Linux applications. While it may require some adjustments to your build process, especially when working with larger libraries like Boost or Qt, the benefits in terms of binary size and consistency across distributions can be substantial.

Remember that while musl aims for broad compatibility, you may still encounter some libraries or system calls that are not fully supported. Always thoroughly test your applications on various target systems to ensure compatibility.

By leveraging musl along with compatible linkers like LLD or Gold, you can create efficient, portable Linux applications that run consistently across a wide range of distributions.

Achieving GLIBC Independence for Better Linux Compatibility

In the diverse ecosystem of Linux distributions, one of the most common challenges for software developers is ensuring their products work seamlessly across different environments. A major hurdle in this quest for compatibility is the GNU C Library (GLIBC) dependency. In this post, we'll explore strategies to make your products more GLIBC-independent, allowing them to function across a wider range of Linux distributions with minimal issues.

Understanding the GLIBC Challenge

GLIBC, the GNU implementation of the C standard library, is a core component of most Linux systems. However, different distributions may use different versions of GLIBC, leading to compatibility issues when running software compiled against newer GLIBC versions on systems with older versions.

Strategies for GLIBC Independence

1. Static Linking

One approach to achieve GLIBC independence is to statically link your application with the required libraries. This ensures that your application carries all necessary dependencies within itself.

Pros:

  • Guaranteed compatibility across systems
  • No external library dependencies

Cons:

  • Larger binary size
  • Cannot benefit from system-wide security updates to libraries
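For reference, static linking with GCC is a single flag; note that glibc still prints link-time warnings for functions such as getaddrinfo that load shared NSS code at run time:

```bash
gcc -static -o myapp myapp.c
file myapp   # should report a statically linked executable
```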

2. Use of Alternative C Libraries

Consider using alternative C libraries like musl or uClibc. These libraries are designed to be lightweight and more portable.

Pros:

  • Smaller binary size
  • Often more compatible across different systems

Cons:

  • May lack some GLIBC-specific features
  • Potential compatibility issues with some third-party libraries

3. Containerization

Utilizing container technologies like Docker can isolate your application and its dependencies from the host system.

Pros:

  • Consistent runtime environment across different systems
  • Easier dependency management

Cons:

  • Overhead of running a container
  • May not be suitable for all deployment scenarios

4. Compile on Older Systems

Compile your application on a system with an older GLIBC version. This ensures compatibility with that version and all newer versions.

Pros:

  • Wide compatibility range
  • No need for special techniques or alternative libraries

Cons:

  • May miss out on newer GLIBC features and optimizations
  • Requires maintaining older build environments

5. Symbol Versioning

Use symbol versioning to specify which version of GLIBC symbols your application uses.

Pros:

  • Fine-grained control over library compatibility
  • Can use newer features while maintaining backwards compatibility

Cons:

  • Requires careful management of symbol versions
  • Can be complex to implement correctly
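Whichever strategy you choose, it helps to know exactly which GLIBC symbol versions a binary actually requires; a quick check with binutils:

```bash
# List the versioned glibc symbols the binary imports, highest version last
objdump -T myapp | grep -o 'GLIBC_[0-9.]*' | sort -Vu
```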

Best Practices

  1. Minimal Dependencies: Reduce reliance on external libraries where possible.
  2. Compatibility Testing: Regularly test your software on various distributions and GLIBC versions.
  3. Clear Documentation: Clearly communicate GLIBC version requirements and compatibility information.
  4. Continuous Integration: Implement CI/CD pipelines that test on multiple Linux environments.

Conclusion

Achieving GLIBC independence is a balancing act between compatibility, performance, and maintainability. By employing these strategies and best practices, you can significantly improve your product's ability to run across a wide range of Linux distributions, enhancing user experience and broadening your potential user base.

Remember, the best approach often depends on your specific use case, target audience, and deployment scenarios. Carefully consider the trade-offs of each method and choose the one that best aligns with your project's goals and constraints.

Friday, July 19, 2024

Capture radio signals with basic electronics

 Have you ever wondered how your car's keyless entry system works? In this blog post, we'll explore how to capture and analyze the radio signals sent by your car's remote lock using some basic electronic components. This project is perfect for hobbyists and those interested in learning more about radio frequency (RF) communication.

Disclaimer

Before we begin, it's important to note that this project is for educational purposes only. Attempting to bypass or replicate car security systems may be illegal and could potentially damage your vehicle. Always respect others' property and privacy.

Understanding the Basics

Most car remote controls operate on frequencies between 315 MHz and 433 MHz. They use a rolling code system to prevent replay attacks, where someone could record and replay your signal to unlock your car.

What You'll Need

  1. Software-Defined Radio (SDR) dongle (e.g., RTL-SDR)
  2. Antenna (telescopic or custom-made for the specific frequency)
  3. Computer with SDR software installed (e.g., SDR#, GNU Radio)
  4. Car remote control

Steps to Capture the Signal

  1. Set Up Your SDR: Connect your SDR dongle to your computer and install the necessary drivers and software.
  2. Configure Your Software: Open your SDR software and set the frequency to the range used by car remotes (try 433 MHz to start).
  3. Prepare for Capture: Set your software to display the waterfall diagram, which shows signal strength over time and frequency.
  4. Capture the Signal: Press a button on your car remote while watching the waterfall diagram. You should see a brief spike in signal strength (a command-line capture sketch follows this list).
  5. Analyze the Signal: Use your software's analysis tools to examine the captured signal. You may be able to see the modulation type (often ASK or FSK) and the data being transmitted.
  6. Decode the Signal: For advanced users, tools like Universal Radio Hacker can help decode the signal structure.
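If you prefer the command line to a GUI for the capture step, the rtl-sdr package ships a small utility that dumps raw IQ samples to a file; a sketch (the frequency and sample count are just examples):

```bash
# Capture roughly 10 seconds of raw IQ samples around 433.92 MHz from an RTL-SDR dongle
rtl_sdr -f 433920000 -s 2048000 -n 20480000 remote_capture.iq
```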

Understanding What You've Captured

The signal you've captured likely contains:

  • A preamble (to synchronize the receiver)
  • A unique identifier for your specific remote
  • The command (lock, unlock, etc.)
  • A rolling code element

Here is a video I found on Facebook Reels that demonstrates this with basic electronics.

Ethical Considerations and Next Steps

Remember, the goal here is to learn about RF communication, not to bypass security systems. Some interesting next steps could include:

  • Learning about different modulation types used in RF communication
  • Studying rolling code algorithms and how they enhance security
  • Exploring other applications of SDR technology

By understanding how these systems work, we can appreciate the technology in our everyday lives and potentially contribute to making these systems more secure in the future.

Happy signal hunting!