Tag Archive : architecture


Preboot eXecution Environment, or PXE (also known as Pre-Execution Environment, and sometimes pronounced "pixie"), is an environment for booting a computer using its network interface, independently of data storage devices (such as hard disks) or installed operating systems. This method is used as the basis of the Diskless Node.

PXE makes use of several network protocols: Internet Protocol (IP), User Datagram Protocol (UDP), Dynamic Host Configuration Protocol (DHCP), and Trivial File Transfer Protocol (TFTP). It also uses the concepts of the Globally Unique Identifier (GUID), Universally Unique Identifier (UUID), and Universal Network Device Interface (UNDI). PXE extends the firmware of the PXE client (the computer to be bootstrapped via PXE) with a set of predefined Application Programming Interfaces (APIs).


In a normal boot process, after the machine is powered on, the client goes to the BIOS and executes the bootstrap program on the HDD or CD/DVD. When using PXE, the boot process changes: after the computer is powered on, it goes to the BIOS and then uses the network card's PXE stack. After that, the client executes the following procedure.

  1. The client firmware locates a PXE redirection service on the network (Proxy DHCP) in order to receive information about available PXE boot servers.
  2. The client parses the information retrieved.
  3. The client asks an appropriate boot server for the file path of a Network Bootstrap Program (NBP).
  4. The client downloads the required image into RAM (using TFTP).
  5. The client executes the image.

In short, the NBP is responsible for the second-stage boot.
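The TFTP download in step 4 uses a very simple protocol. As an illustration (not part of PXE itself), a TFTP read request (RRQ) per RFC 1350 is just a 2-byte opcode followed by a NUL-terminated filename and transfer mode; the file name `pxelinux.0` below is only an example NBP, and the helper name is mine:

```python
import struct

def tftp_rrq(filename: str, mode: str = "octet") -> bytes:
    """Build a TFTP read-request (RRQ) packet as defined by RFC 1350:
    a 2-byte opcode (1 = RRQ), then the filename and mode, each NUL-terminated."""
    return (struct.pack("!H", 1)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

# A PXE client sends a packet like this to UDP port 69 of the boot server.
packet = tftp_rrq("pxelinux.0")
print(packet)  # b'\x00\x01pxelinux.0\x00octet\x00'
```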

What Do You Need?

As you might expect, PXE relies on network communication. In detail, it can be considered a client-server system: you need a server which provides everything required, and a client which sends requests to the server.

On Server Side

A server machine has to be configured to receive requests for PXE boot. It needs:

  1. DHCP
  2. TFTP
  3. appropriate NBP
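One common way (among several) to provide the DHCP and TFTP pieces in a single daemon is dnsmasq; the sketch below is a minimal, hypothetical configuration, where the interface name, address range, and NBP file name are all placeholders you would adapt:

```
# /etc/dnsmasq.conf -- minimal PXE server sketch (all values are examples)
interface=eth0
dhcp-range=192.168.0.50,192.168.0.150,12h
dhcp-boot=pxelinux.0       # the NBP the client will be told to request
enable-tftp
tftp-root=/srv/tftp        # pxelinux.0 must be placed in this directory
```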

On Client Side

A client machine should support PXE booting. You should be able to enable it in the firmware setup.

The Architecture of PlayStation 1

December 9, 2015 | Article

PlayStation, or PlayStation 1 (abbreviated as PS1), is the first generation of home video game consoles made by Sony Computer Entertainment.

This article will discuss the PlayStation architecture and some of its important aspects.

General Specification

The PlayStation features the ability to read and play audio CDs and Video CDs. The CD player can shuffle the playback order, play the songs in a programmed order, and repeat one song or the entire disc. The PS1 doesn't have internal storage; instead it uses an external memory card to record data. Memory cards are managed by the Memory Card Manager, which can be accessed by starting the console without inserting a game or by keeping the CD tray open.

The PlayStation 1 supports two slots for wired controllers and two slots for memory cards.

The Central Processing Unit

The Sony PlayStation employs a MIPS R3000A-compatible 32-bit RISC chip running at 33.8688 MHz. The features of the chip:

  1. Operating performance of 30 MIPS (Million Instructions Per Second)
  2. Bus bandwidth of 132 MB/s
  3. 4 kB instruction cache
  4. 1 kB non-associative SRAM data cache
  5. 2 MB of RAM (integrated)

The Geometry Transformation Engine employed by the CPU provides additional vector math instructions used for 3D graphics. Its features:

  1. Operating performance of 66 MIPS (Million Instructions Per Second)
  2. 360,000 polygons per second
  3. 180,000 texture-mapped and light-sourced polygons per second

Inside the CPU also resides the MDEC, which is responsible for decompressing images and video. It reads three RLE (Run-Length Encoding) encoded 16×16 macroblocks, runs an IDCT, and assembles a single 16×16 RGB macroblock. The output data may be transferred directly to the GPU via DMA (Direct Memory Access). Its features:

  1. Compatible with MJPEG and H.261 files
  2. Operating performance of 80 MIPS (Million Instructions Per Second)
  3. Directly connected to the CPU bus

Graphics Processing Unit

The GPU handles 2D graphics processing, separate from the main 3D engine on the CPU. It features:

  1. Maximum of 16.7 million colors (24-bit color depth)
  2. Resolution from 256×224 to 640×480
  3. Adjustable frame buffer
  4. Unlimited color lookup tables
  5. Emulation of simultaneous backgrounds (for parallax scrolling)
  6. Flat or Gouraud shading and texture mapping
  7. 1 MB of VRAM

Sound Processing Unit

The SPU supports ADPCM (Adaptive Differential Pulse-Code Modulation) sources with up to 24 channels, a sampling rate of up to 44.1 kHz, and 512 kB of memory.


CD-ROM Drive

The drive is tray-loading and XA Mode 2 compliant. It uses CD-DA (CD-Digital Audio) and a 128 kB buffer, with maximum data throughput reaching 300 kB/s.


Connectivity

The PlayStation has AV Multi Out. As the PlayStation went through numerous variants during its production, the hardware configuration, especially connectivity, may vary.

For SCPH-100x to SCPH-3xxx, the PlayStation has RCA composite video and stereo out. It also has RFU DC Out.

The older SCPH-1000 has S-Video out.

The goal of every programmer is to produce a good program. Nearly all programmers strive to sharpen their coding and engineering skills. But programming is not only a matter of code.

A good program is not only responsive and robust; we should not forget that other factors are also important for a project.

In this article, I list some characteristics of good program design, according to my experience.

Minimal Complexity

The main goal in any program should be to minimize complexity. As a developer, most of your time will be spent maintaining or upgrading existing code. If it is a complete mess, your life is going to be that much harder. Try to avoid solutions where a complex one-liner replaces 20 lines of easy-to-read code. In a year, when you come back to that code, it will take you that much longer to figure out what you did.

The best practice here is structuring your code and following established patterns. Modularity is also good for design: don't write all the code in a single source file.

Ease of Maintenance

This means making your code easy to update. Find where your code is most likely to change, and make it easy to update there. The easier you make it, the easier your life is going to be down the road. Think of it as a little insurance for your code.

Loose Coupling

So what is loose coupling? It is when one portion of code is not dependent on another to run properly. It means bundling code into nice little self-reliant packages that don't rely on outside code. How do you do this? Make good use of abstraction and information hiding.
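As a small illustrative sketch (the class and function names are invented), depending on an abstract interface rather than a concrete class keeps the calling code decoupled from any particular backend:

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """Abstract interface: callers depend on this, never on a concrete backend."""
    @abstractmethod
    def load(self) -> list:
        ...

class MemoryStorage(Storage):
    """One concrete backend; its internals stay hidden behind Storage."""
    def __init__(self, items):
        self._items = items          # information hiding: private detail

    def load(self) -> list:
        return list(self._items)

def total(storage: Storage) -> int:
    # Works with any Storage implementation; swapping the backend
    # (file, database, network) requires no change to this function.
    return sum(storage.load())

print(total(MemoryStorage([1, 2, 3])))  # 6
```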


Extensibility

This means designing your program so that you can add or remove elements without disturbing its underlying structure. A good example would be a plug-in.


Reusability

Write code that can be used in unrelated projects, and save yourself some time. Again, information hiding is your best bet for making this happen.

High Fan-in

This refers to having a large number of classes that use a given class. This implies that you are making good use of utility classes. For example you might have a bunch of classes that use the Math class to do calculations.

Low to Medium Fan-out

This refers to having a low to medium number of classes used by any given class. If a class includes 7 or more other classes, this is considered high fan-out, so try to keep the number of classes you include to a minimum. A high fan-out suggests that the design may be too complex.


Portability

Simply put, design a system that can be moved to another environment. This isn't always a requirement of a program, but it should be considered; it might make your life easier if you find out your program has to work on different platforms.


Leanness

Leanness means making the design with no extra parts; everything within the design has to be there. This is generally a goal if you have speed and efficiency in mind. A good example of where this comes in handy is a program that has to run on a system with limited resources (cell phones, older computers).

Standard Technique

Try to standardize your code. If each developer puts in their own flavor of code, you will end up with an unwieldy mess. Try to lay out common approaches for developers to follow; it will give your code a sense of familiarity for everyone working on it.

Linux Kernel Source & Versioning

December 7, 2015 | Article

Kernel Versioning

Anyone can build the Linux kernel. The Linux kernel is provided freely at http://www.kernel.org/; everything from the earliest version to the latest is available. The kernel is released regularly and uses a versioning system to distinguish earlier and later kernels. To find out the Linux kernel version, the simple command uname can be used. For example, I invoke this and get:

# uname -r
3.7.8-gnx-z30a

In that command's output, you can see the dotted decimal string 3.7.8: this is the Linux kernel version. In this dotted decimal string, the first value, 3, denotes the major release number; the second value, 7, denotes the minor release; and the third value, 8, is called the revision number. The major release combined with the minor release is called the kernel series. Thus, I use kernel series 3.7.

Another string comes after 3.7.8: gnx-z30a. I'm using a self-compiled kernel and added -gnx-z30a as a signature to my kernel version. Distributions also add their own signature after the kernel version, such as Ubuntu, Fedora, Red Hat, etc.
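The layout described above can be captured in a few lines. This sketch (the helper name is mine) simply splits a release string of the form printed by uname -r:

```python
def parse_release(release: str):
    """Split a kernel release string like '3.7.8-gnx-z30a' into its parts."""
    version, _, signature = release.partition("-")
    major, minor, revision = (int(n) for n in version.split("."))
    series = f"{major}.{minor}"   # major + minor = the kernel series
    return major, minor, revision, series, signature

print(parse_release("3.7.8-gnx-z30a"))  # (3, 7, 8, '3.7', 'gnx-z30a')
```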

An example of building kernel can be read at this article.

Kernel Source Exploration

To build the Linux kernel, you will need the latest or any other stable kernel sources. As an example, we take the sources of stable kernel release 3.8.2. Different versions of the Linux kernel sources can be found at http://www.kernel.org; get the latest or any stable release from there.

Assuming you have downloaded the stable kernel release source to your machine, extract the source and put it in the /usr/src directory.

Most of the kernel source is written in C. It is organized in various directories and subdirectories, each named after what it contains. The directory structure of the kernel may look like the diagram below.

Linux Kernel Source

Now let's dive more into each directory.


arch

The Linux kernel can be installed on anything from a handheld device to huge servers. It supports the Intel, Alpha, MIPS, ARM, and SPARC processor architectures. This 'arch' directory contains subdirectories for specific processor architectures, each holding the architecture-dependent code. For example, for a PC the code is under the arch/i386 directory; for an ARM processor the code is under arch/arm (arch/arm64 for 64-bit ARM), etc.


init

LILO, the Linux loader, loads the kernel into memory, and control is then passed to an assembler routine, arch/x86/kernel/head_x.S. This routine is responsible for hardware initialization, and hence it is architecture-specific. Once hardware initialization is done, control is passed to the start_kernel() routine defined in init/main.c. This routine is analogous to the main() function in any C program: it is the starting point of the kernel code. After the architecture-specific setup is done, kernel initialization starts, and the kernel initialization code is kept under the init directory. The code under this directory is responsible for proper kernel initialization, including initialization of page addresses, the scheduler, traps, IRQs, signals, timers, the console, etc. It is also responsible for processing the boot-time command line arguments.


crypto

This directory contains the source code of different encryption algorithms, e.g. MD5, SHA-1, Blowfish, Serpent, and many more. All these algorithms are implemented as kernel modules; they can be loaded and unloaded at run time. We will talk about kernel modules in subsequent chapters.


Documentation

This directory contains documentation of the kernel sources.


drivers

Device driver code can be split into two parts. One part communicates with the user: it takes commands from the user, displays output to the user, and so on. The other part communicates with the device: controlling the device, and sending or receiving commands and data to and from it. The part of a device driver that communicates with the user is hardware-independent and resides under this 'drivers' directory, which contains the source code of various device drivers. Device drivers are implemented as kernel modules. As a matter of fact, the majority of the Linux kernel code is device driver code, so the majority of our discussion too will revolve around device drivers.

This directory is further divided into subdirectories depending on the devices whose driver code they contain:

  • block – contains drivers for block devices, e.g. hard disks.
  • cdrom – contains drivers for proprietary CD-ROM drives.
  • char – contains drivers for character devices, e.g. terminals, serial ports, mice, etc.
  • isdn – contains ISDN drivers.
  • net – contains drivers for network cards.
  • pci – contains drivers for PCI bus access and control.
  • scsi – contains drivers for SCSI devices.
  • ide – contains drivers for IDE devices.
  • sound – contains drivers for various sound cards.

The other part of a device driver, the part that communicates with the device, is hardware-dependent, more specifically bus-dependent: it depends on the type of bus the device uses for communication. This bus-specific code resides under the arch/ directory.


fs

Linux has support for a lot of file systems, e.g. ext2, ext3, FAT, VFAT, NTFS, NFS, JFFS, and more. The source code for these different file systems is given in this directory, under file-system-specific subdirectories, e.g. fs/ext2, fs/ext3, etc.

Also, Linux provides a virtual file system (VFS) that acts as a wrapper around these different file systems. The Linux virtual file system interface enables the user to use different file systems under one single root ('/'). The code for the VFS also resides here. Data structures related to the VFS are defined in include/linux/fs.h; please take note, it is a very important header file for kernel development.


kernel

This is one of the most important directories in the kernel. It contains the generic code for the core kernel subsystems: code for system calls, timers, schedulers, DMA, interrupt handling, and signal handling. The architecture-specific kernel code is kept under arch/*/kernel.


include

Along with the kernel/ directory, this include/ directory is also very important for kernel development. It holds the generic kernel headers. This directory too contains many subdirectories, each containing architecture-specific header files.


ipc

The code for all three System V IPC mechanisms (semaphores, shared memory, message queues) resides here.


lib

The kernel's library code is kept under this directory. The architecture-specific library code resides under arch/*/lib.


mm

This too is a very important directory from the kernel development perspective. It contains the generic code for memory management and the virtual memory subsystem. Again, the architecture-specific code is in the arch/*/mm/ directory. This part of the kernel code is responsible for requesting/releasing memory, paging, page fault handling, memory mapping, different caches, etc.


net

The code for the kernel's networking subsystem resides here. It includes code for various protocols, like TCP/IP, ARP, Ethernet, ATM, Bluetooth, etc. It includes the socket implementation too; quite an interesting directory to look into for networking geeks.


scripts

This directory holds the kernel build and configuration subsystem: the scripts and code used to configure and build the kernel.


security

This directory includes security functions and SELinux code, implemented as kernel modules.


sound

This directory includes the code for the sound subsystem.


modules

When the kernel is compiled, a lot of code is compiled as modules which are added to the kernel image later, at runtime. This directory holds all those modules. It will be empty until the kernel is built at least once.

Apart from these important directories, there are also a few files under the root of the kernel sources:

  • COPYING – Copyright and licensing information (GNU GPL v2).
  • CREDITS – A partial credits file of people who have contributed to the Linux project.
  • MAINTAINERS – A list of the maintainers of kernel subsystems and drivers. It also describes how to submit kernel changes.
  • Makefile – The kernel's main (root) makefile.
  • README – The release notes for the Linux kernel. They explain how to install and patch the kernel, and what to do if something goes wrong.


Kernel Documentation

We can use the make documentation targets to generate the Linux kernel documentation. By running these targets, we can build the documents in any of several formats: PDF, HTML, man pages, PostScript, etc.

To generate kernel documentation, give any of the following commands from the root of your kernel sources.

make pdfdocs
make htmldocs
make mandocs
make psdocs

Source Browsing

Browsing the source code of a large project like the Linux kernel can be very tedious and time-consuming. Unix systems provide two tools, ctags and cscope, for browsing the codebase of large projects, and source code browsing becomes very convenient using them. The Linux kernel has built-in support for cscope.

Using cscope, we can:

  • Find all references of a symbol
  • Find a function's definition
  • Find the caller graph of a function
  • Find a particular text string
  • Change a particular text string
  • Find a particular file
  • Find all the files that include a particular file

Kernel Mode and Context

December 7, 2015 | Article

User Mode and Kernel Mode

In Linux, software falls into two categories: user programs and the kernel. The Linux kernel runs in a special privileged mode compared to user applications: it runs in a protected memory space and has access to the entire hardware. This memory space and privileged state are collectively known as kernel space or kernel mode.

On the contrary, user applications run in user space and have limited access to resources and hardware. A user-space application can't directly access hardware or kernel-space memory, but the kernel has access to the entire memory space. To communicate with hardware, a user application needs to make a system call, asking for a service from the kernel.

Different Contexts of Kernel Code

Entire kernel code can be divided into three categories.

  1. Process Context
  2. Interrupt Context
  3. Kernel Context

Process Context

User applications can't access the kernel space directly, but there is an interface through which user applications can call functions defined in the kernel space. This interface is known as a system call. A user application can request kernel services using a system call.

The read() and write() calls are examples of system calls. A user application calls read()/write(), which in turn invokes sys_read()/sys_write() in the kernel space. In this case, kernel code executes the request of the user-space application. Kernel code that executes on the request, or on behalf, of a user application is called process context code. All system calls fall into this category.
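This is easy to observe from user space. Python's os.read() and os.write() are thin wrappers over the same system calls, so each call below crosses into the kernel, where process context code runs on the program's behalf (the pipe is just a convenient pair of file descriptors to exercise them on):

```python
import os

# os.pipe() is itself a system call; it returns two file descriptors.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello kernel")   # write() syscall -> sys_write() runs for us
data = os.read(read_fd, 64)           # read() syscall  -> sys_read()
print(data)  # b'hello kernel'

os.close(read_fd)
os.close(write_fd)
```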

Interrupt Context

Whenever a device wants to communicate with the kernel, it sends an interrupt signal to the kernel. The moment the kernel receives an interrupt request from the hardware, it starts executing some routine in response to that request. This response routine is called an interrupt service routine, or interrupt handler. An interrupt handler is said to execute in interrupt context.

Kernel Context

There is some code in the Linux kernel that is invoked neither by a user application nor by an interrupt. This code is integral to the kernel and keeps running always: memory management, process management, and the I/O schedulers all lie in this category. This code is said to execute in kernel context.

Linux Kernel, Components, and Integration

December 7, 2015 | Article

Kernel and Linux Kernel

In computer science terms, the kernel is the core of an operating system. A machine (for example, a personal computer) can use various hardware produced by different vendors, all assembled into a single machine. Hardware like the processor, RAM, hard disk, etc. are the components used to build a computer. But once the computer is built, we need an operating system so that all of this hardware can be operated. The kernel does that job.

The operating system receives requests from the user and processes them on the user's behalf. Requests are received through a command shell or some other kind of user interface, and are processed by the kernel. So the kernel acts like the engine of the operating system, enabling a user to use the computer system, while the shell is the outer part of the operating system, providing an interface for the user to communicate with the kernel.

The kernel bridges applications and hardware

Linux is one such kernel. It is a UNIX-like kernel created by Linus Torvalds in 1991. Linux is open source, meaning everyone can contribute, develop, and make their own kernel based on Linus' kernel. Nowadays, every smart system uses a kernel to operate, and some of those systems use Linux (or perhaps a subset of it).


Kernel Components

If we observe more closely, a kernel can be divided into several components. The major components forming a kernel are:

  • Low-Level Drivers : Architecture-specific drivers, responsible for CPU, MMU, and on-board device initialization.
  • Process Scheduler : The scheduler is responsible for fair CPU time-slice allocation to different processes. Imagine you have some resource and must ensure every application can use a fair amount of it.
  • Memory Manager : The memory management system is responsible for allocating memory to different processes and sharing it among them.
  • File System : Linux supports many file system types, e.g. FAT, NTFS, JFFS, and lots more. The user doesn't have to worry about the complexities of the underlying file system type; for this, Linux provides a single interface, the virtual file system. Through this single interface, users can use the services of different underlying file systems, whose complexities are abstracted away from the user.
  • Network Interface : This component of the Linux kernel provides access and control to different networking devices.
  • Device Drivers : These are the high-level drivers.
  • IPC : Inter-Process Communication; the IPC subsystem allows different processes to share data among themselves.



Integration Design

We have seen that a kernel consists of different components. The integration design tells how these different components are integrated to create the kernel's binary image.

There are mainly two integration designs used for operating system kernels: monolithic and micro. Although there are more than two, we will limit our discussion to the two most used designs.

In the monolithic design, all the kernel components are built together into a single static binary image. At boot-up time, the entire kernel is loaded and then runs as a single process in a single address space. All the kernel components/services exist in that static kernel image, and all kernel services are running and available all the time.

Since everything inside the kernel resides in a single address space, no IPC-like mechanism is needed for communication between kernel services. For all these reasons, monolithic kernels give high performance. Most Unix kernels are monolithic.

The disadvantage of this static kernel is the lack of modularity and hot-swap ability. Once the static kernel image is loaded, we can't add or remove any component or service; our only option is to change the hardcoded kernel and rebuild it. Another disadvantage is that such a kernel uses a lot of memory, so resource consumption is higher in the case of monolithic kernels.

The second kind of kernel is the microkernel. In a microkernel, a single static kernel image is not built; instead, the kernel is broken down into different small services.

At boot-up time, the core kernel services are loaded and run in privileged mode. Whenever some other service is required, it has to be loaded in order to run.

Unlike in a monolithic kernel, not all services are up and running all the time; they run as and when requested. Also, unlike in monolithic kernels, services in a microkernel run in separate address spaces, so communication between two different services requires an IPC mechanism. For all these reasons, microkernels are not high-performance kernels, but they require fewer resources to run.

The Linux kernel takes the best of both designs. Fundamentally it is a monolithic kernel: the entire Linux kernel and all its services run as a single process, in a single address space, achieving very high performance. But it also has the capability to load and unload services at run time, in the form of kernel modules.
