Category: Uncategorized


NodeJS TCP

December 9, 2015 | Uncategorized | No Comments

TCP, or Transmission Control Protocol, is a connection-oriented protocol for data communication and transmission over a network. It provides reliability for transmitted data, so the packets sent or received are guaranteed to arrive in the correct format and order.

Node has a first-class HTTP module implementation, but it descends from the “bare-bones” TCP module. Because of this, everything described here also applies to every class descending from the net module.

TCP Server

We can create a TCP server and client using the “net” module.

Here is how we create a TCP server:

require('net').createServer(function(socket) {
    // new connection
    socket.on('data', function(data) {
        // got data
    });

    socket.on('end', function() {
        // connection closed
    });

    socket.write('Some string');
}).listen(4001);

Here our server is created using the “net” module and listens on port 4001 (to distinguish it from our HTTP server on port 4000). Our callback is invoked every time a new connection arrives, which is indicated by the “connection” event.

On this socket object, we can then listen for “data” events when we get a package of data, and for the “end” event when that connection is closed.

Listening

As we saw, after the server is created, we can bind it to a specific TCP port.

var port = 4001;
var host = '0.0.0.0';
server.listen(port, host);

The second argument (host) is optional. If omitted, the server will accept connections directed to any IP address.

This method is asynchronous. To be notified when the server is really bound we have to pass a callback.

//-- With host specified
server.listen(port, host, function() {
    console.log('server listening on port ' + port);
});

//-- Without host specified
server.listen(port, function() {
    console.log('server listening on port ' + port);
});

Write Data

We can pass in a string or buffer to be sent through the socket. If a string is passed in, we can specify an encoding as a second argument. If no encoding is specified, Node assumes UTF-8. The operation is much like in the HTTP module.

var flush = socket.write('453d9ea499aa8247a54c951', 'base64');

The socket object is an instance of net.Socket, which is a WriteStream, so the write method returns a boolean saying whether it flushed to the kernel or not.

We can also pass in a callback. This callback will be invoked when data is finally written out.

// with encoding specified
var flush = socket.write('453d9ea499aa8247a54c951', 'base64', function(){
    // flushed
});

// Assuming UTF-8
var flush = socket.write('Heihoo!', function(){
    // flushed
});

.end()

Method .end() is used to end the connection. This will send the TCP FIN packet, notifying the other end that this end wants to close the connection.

But, we can still get “data” events after we have issued this. It is simply because there still might be some data in transit, or the other end might be insisting on sending you some more data.

In this method, we can also pass in some final data to be sent:

socket.end('Bye bye!');

Other Methods

The socket object is an instance of net.Socket, and it implements the WriteStream and ReadStream interfaces, so all of those methods, like pause() and resume(), are available. We can also bind to “drain” events like any other stream object.
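
To illustrate, here is a hedged sketch of handling backpressure with those pieces: we stop reading from a file when write() returns false and resume once the socket emits “drain” (the big-file.txt name is just a placeholder).

var fs = require('fs');

require('net').createServer(function(socket) {
    var file = fs.createReadStream('big-file.txt');

    file.on('data', function(chunk) {
        // write() returns false when the chunk had to be buffered in memory
        var flushed = socket.write(chunk);
        if (!flushed) {
            file.pause();    // stop reading until the socket drains
        }
    });

    socket.on('drain', function() {
        file.resume();       // buffer flushed to the kernel, keep going
    });

    file.on('end', function() {
        socket.end();
    });
}).listen(4001);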

Idle Sockets

A socket can be idle for some time; for example, when no data has been received for a while. When this happens, we can be notified by calling setTimeout():

var timeout = 60000;    // 1 minute
socket.setTimeout(timeout);
socket.on('timeout', function() {
    socket.write('idle timeout, disconnecting, bye!');
    socket.end();
});

or in shorter form:

socket.setTimeout(60000, function() {
    socket.end('idle timeout, disconnecting, bye!');
});

Keep-Alive

Keep-alive is a mechanism for preventing an idle connection from timing out. The concept is very simple: when we set up a TCP connection, we associate a set of timers with it, and some of them deal with the keep-alive procedure. When the keep-alive timer reaches zero, we send our peer a keep-alive probe packet with no data in it and the ACK flag turned on.

In Node, all this functionality has been simplified, so we can enable keep-alive by invoking:

socket.setKeepAlive(true);

We can also specify the delay between the last packet received and the next keep-alive probe as the second argument to the keep-alive call.

socket.setKeepAlive(true, 10000);    // 10 seconds

Delay or No Delay

When sending off TCP packets, the kernel buffers data before sending it off and uses Nagle's algorithm to determine when to send the data. If you wish to turn this off and demand that the data gets sent immediately after write commands, use:

socket.setNoDelay(true);

Of course, we can turn buffering back on by invoking it with a false value.

Connection Close

The server's close() method stops it from accepting new connections. This function is asynchronous, and the server will emit the “close” event when it has actually closed:

var server = ...
server.close();
server.on('close', function() {
    console.log('server closed!');
});

TCP Client

We can create a TCP client which connects to a TCP server using the “net” module.

var net = require('net');
var port = 4001;
var host = 'www.google.com';
var conn = net.createConnection(port, host);

Here, if we omit the host when creating the connection, it defaults to localhost.

Then we can listen for data.

conn.on('data', function(data) {
    console.log('some data has arrived');
});

or send some data.

conn.write('I send you some string');

or close it by ending our side of the connection (net.Socket uses end() rather than close()).

conn.end();

and also listen for the “close” event (emitted when the connection is fully closed, whether by us or by the peer):

conn.on('close', function() {
    console.log('connection closed');
});

Socket conforms to the ReadStream and WriteStream interfaces, so we can use all of the previously described methods on it.
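
Because the socket is both readable and writable, we can also pipe other streams through it. A minimal sketch, assuming a server is already listening on port 4001, that turns the process into a crude netcat-style client:

var net = require('net');
var conn = net.createConnection(4001, 'localhost');

// send whatever we type to the server, and print whatever it sends back
process.stdin.pipe(conn);
conn.pipe(process.stdout);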

Error Handling

When handling a socket on the client or the server, we can (and should) handle the errors by listening to the “error” event.

Here is a simple template of how we do it:

require('net').createServer(function(socket) {
    socket.on('error', function(error) {
        // do something
    });
});

If we don’t catch an error, Node will treat it as an uncaught exception and terminate the current process. Unless that’s what we want, we should handle the errors.
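
The server object itself also emits “error” events; a common one is trying to bind a port that is already taken. Here is a hedged sketch of handling that case (falling back to port 4002 is just an illustration):

var server = require('net').createServer(function(socket) {
    // handle connections here
});

server.on('error', function(err) {
    if (err.code === 'EADDRINUSE') {
        console.log('port 4001 is taken, trying 4002 instead');
        server.listen(4002);
    } else {
        throw err;
    }
});

server.listen(4001);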

Firefox OS is a smartphone and tablet operating system developed by Mozilla. It is developed under the same brand as Mozilla’s famous web browser project, Mozilla Firefox. Mozilla has a big and clear vision: Firefox OS is developed to adapt to the dynamics of the web. It anticipates users’ needs, adapts to every situation, and instantly delivers the information the user wants.

The interesting part is that Mozilla has developed an add-on for developing and simulating Firefox OS on top of the Firefox browser.

This article will discuss how we can prepare a portable development environment and simulate Firefox OS.

About Firefox OS Simulator

As the name implies, the Firefox OS Simulator is a simulator for Firefox OS. As of September 29th, 2013, the Firefox OS Simulator is still at an early stage of development, so it isn’t yet fully reliable and complete. Mozilla also encourages reporting bugs on GitHub and participating in the development process.

The big idea is that the Firefox OS Simulator is an add-on for the Firefox browser. Using this add-on we can test and debug Firefox OS apps on the desktop. It is like the Android Emulator, but lightweight and easy to use. As noted above, don’t expect too much for now; still, it’s quite promising 😉

So, what’s bundled to this add-on?

  1. the Simulator: including the Firefox OS desktop client. These are the higher layers of Firefox OS that run on the desktop. The simulator also includes some additional emulation features that aren’t in the standard Firefox OS desktop builds.
  2. the Dashboard: a tool hosted by the Firefox browser that enables you to start and stop the Simulator and to install, uninstall, and debug apps running in it. The Dashboard also helps you push apps to a real device, and checks app manifests for common problems.

Installing and Running the Simulator

Installing the simulator is really easy. As easy as installing any other Firefox add-on.

  1. Using Firefox, go to the Simulator’s page.
  2. Click “Add to Firefox”
  3. Once the add-on has been downloaded, you will be prompted to install it. Click “Install Now”.

The add-on is quite big in size, thus Firefox may freeze for several seconds while installing it. A dialog titled “Warning: Unresponsive script” might appear. If it does, just click “Continue” to wait for installation to finish.

The Dashboard opens automatically when you install the Simulator. You can reopen it at any time by going to the “Firefox” menu (or “Tools”), then “Web Developer”, then “Firefox OS Simulator”.

[Image: firefoxos-1]

And here is the Dashboard for you:

[Image: firefoxos-2]

Here, the left panel has a button to activate the simulator.

The right panel lists all the applications you have developed that can run on top of Firefox OS (in the simulator).

Managing Apps

Adding apps

To add a packaged app to the Simulator, open the Dashboard, click “Add Directory” and select the manifest file for your app.

To add a hosted app, enter a URL in the textbox where it says “URL for page or manifest.webapp”, then click “Add URL”. If the URL points to a manifest, then that manifest will be used. If it doesn’t, the Dashboard will generate a manifest for the URL: so you can add any website as an app just by entering its URL.

When you add an app, the Dashboard will run a series of tests on your manifest file, checking for common problems. See the section on Manifest Validation for details on what tests are run.
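
For reference, a minimal packaged-app manifest (manifest.webapp) looks roughly like the sketch below; the name, icon path and developer details are placeholders, not values from this article.

{
  "name": "My Test App",
  "description": "A tiny app for trying out the Firefox OS Simulator",
  "launch_path": "/index.html",
  "icons": {
    "128": "/img/icon-128.png"
  },
  "developer": {
    "name": "Your Name",
    "url": "http://example.com"
  },
  "default_locale": "en"
}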

Unless manifest validation reveals that your app has errors, the Dashboard will then automatically run your app in the Simulator.

On each entry, you will see information about the app:

  • name, taken from the manifest
  • type, which will be one of “Packaged”, “Hosted”, or “Generated”
  • link to manifest file
  • result of manifest validation

You also get four commands:

  • “Refresh”: use this to update and reload the app in the Simulator after you have made changes to it. This also makes the Dashboard validate the manifest again. If you make changes to your app they will not be reflected automatically in the installed app: you will need to refresh the app to apply the changes.
  • “Connect”: use this to connect developer tools to the selected app. The Dashboard will start the Simulator and app if they aren’t already running.
  • “Remove” (“X”): use this to remove the app from the Simulator and the Dashboard. You can undo this action as long as the Dashboard tab is open.
  • “Receipt”: use this to test receipt verification for paid apps. After you select a type of receipt to test, the app will be reinstalled with a test receipt of the given type.

The Simulator Device

Here is what the simulator looks like. The simulator device will be opened on new Firefox window.

[Image: firefoxos-3]

The simulator can be started in two different ways:

  • adding an app, or clicking the “Refresh” or “Connect” button next to your app’s entry: the Dashboard will automatically run your app in the Simulator
  • clicking the button labeled “Stopped” on the left-hand side of the Dashboard: the Simulator will boot to the Home screen and you’ll need to navigate to your app

The simulator appears as a 320×480-pixel window with a toolbar at the bottom and a menubar at the top that contains some extra features.

You can left-click on the Firefox OS area to simulate touch events. To simulate a drag, click, hold, and move toward your preferred area. For example, the default launcher is on the left, and dragging to the right will bring you to the menu.

Now that we have seen the toolbar, let’s play around with it.

There are three buttons on the toolbar. From left to right, these are: the Home button, the Screen Rotation button, and the Geolocation button.

  • the Home button takes you to the Home screen (or to the task list if you keep it pressed for a couple of seconds)
  • the Screen Rotation button switches the device between portrait and landscape orientation. This will generate the orientationchange event.
  • the Geolocation button triggers a dialog asking you to share your geographic location, either using your current coordinates or supplying custom coordinates: this will be made available to your app via the Geolocation API.

Developer Tools

You can attach developer tools to the Simulator to help debug your app. At the moment you can only attach the JavaScript Debugger, the Web Console, the Style Editor, the Profiler and the Network Monitor. Mozilla promises to bring support for more developer tools.

To attach developer tools to the Simulator, click the “Connect” button for an app. The Dashboard will then open a developer toolbox pane at the bottom of the Dashboard tab and connect it to the app:

Web Console

The app can log to this console using the global console object, and it displays various other messages generated by the app: network requests, CSS and JS warnings/errors, and security errors.

Debugger

Using the Debugger, you can step through JavaScript code that is running in the connected app, manage breakpoints, and watch expressions to track down errors and problems faster.

Style Editor

You can view and edit CSS files referenced in the app using the connected Style Editor. Your changes will be applied to the app in real time, without needing to refresh the app.

Profiler

Using the Profiler tool connected to the app, you can find out where your JavaScript code is spending too much time. The Profiler periodically samples the current JavaScript call stack and compiles statistics about the samples.

Network Monitor

Thanks to the new Network Monitor, you can analyze the status, headers, content and timing of all the network requests initiated by the app through a friendly interface.

Arch Linux ARM is a Linux distribution for the ARM architecture. This distribution is one of the recommended operating systems for the Raspberry Pi board.

By default, Arch Linux ARM uses a global repository. At some point, that repository can redirect your pacman to an appropriate (other) repository based on your location. But sometimes that’s not enough.

In this article, we will discuss how to change the Arch Linux ARM repository manually.

Preparation

Make sure your Pi has Arch Linux ARM installed. You should also make sure your Pi can connect to the internet.

You need to acquire root privileges, because we will edit pacman’s mirror configuration file.

Configuration

First, open /etc/pacman.d/mirrorlist and look for the location closest to you. If you see a preferable repository, you can start editing this file.

Comment out the line under “Geo-IP based mirror selection and load balancing”:

# Server = http://mirror.archlinuxarm.org/armv6h/$repo

and uncomment the mirror server you choose; for example, I choose Finland:

Server = http://fi.mirror.archlinuxarm.org/armv6h/$repo

If you have another mirror which is not listed there (you have to be really sure about it), you can comment out every line and add a new line at the bottom. For example, if the mirror server is http://archarm.xathrya.web.id, then you write the following line:

Server = http://archarm.xathrya.id/armv6h/$repo
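
After saving the mirrorlist, force pacman to refresh its package databases so the new mirror is actually used. A minimal sketch:

pacman -Syy

The doubled y makes pacman refresh the databases even if they appear to be up to date.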

Managing SquashFS File

December 9, 2015 | Uncategorized | No Comments

SquashFS is a compressed read-only file system. SquashFS compresses files, inodes, and directories, and supports block sizes up to 1 MB for greater compression.

SquashFS is intended for general read-only file system use and for constrained block device / memory systems (e.g. embedded systems) where low overhead is needed. Originally it used gzip compression; now it also supports LZMA, LZMA2 and LZO compression.

In this article, we will go through some notes about how we can manage SquashFS. The term manage in this article includes creating, mounting, and modifying SquashFS files. I use Slackware64 14.0 as the host; however, you can use any Linux distribution which has the SquashFS tools installed.

You can always skip the basics and introduction, but I recommend you read them.

Preparation

Check whether you have the required tools installed. Test by invoking the following binaries:

mksquashfs
unsquashfs

If you don’t have them yet, you can download them or compile them from source. Just go to the official site.

For Slackware, go to the SlackBuilds site for SquashFS.

I assume you have the required tools.

Overview

As stated before, SquashFS is a compressed read-only file system. You can imagine it as a single file which is, in fact, an entire file system. It’s like a database file for SQLite, or a disk image for various virtual machine apps, but serving only one purpose: being a filesystem. Read-only means it can only be read; no write operation is allowed. Modification requires dissecting the file.

Other important things:

  1. Data, inodes, directories are compressed
  2. SquashFS stores full uid/gids (32 bits), and file creation time
  3. Files up to 2^64 bytes are supported; file systems can be up to 2^64 bytes
  4. Inode and directory data are highly compacted, and packed on byte boundaries; each compressed inode is on average 8 bytes in length (the exact length varies on file type, i.e. regular file, directory, symbolic link, and block/character device inodes have different sizes)
  5. SquashFS can use block sizes up to 64 Kb (2.x) and 1 Mb (3.x). The default size is 128 Kb (3.x), which achieves greater compression ratios than the normal 4K block size
  6. The 2.x release introduced the concept of fragment blocks: the ability to join multiple files smaller than the block size into a single block, achieving greater compression ratios
  7. File duplicates are detected and removed
  8. Both big and little endian architectures are supported; SquashFS can mount file systems created on different byte-order machines

SquashFS Capable Kernel

The on-disk format of SquashFS has stabilized enough that it has been merged into version 2.6.29 of the Linux kernel. I assume you use a kernel newer than 2.6.29; if not, you should patch your kernel. Today’s distributions, however, ship kernels newer than 2.6.29.

The Tools

You should check the command-line options of mksquashfs and unsquashfs for their respective capabilities.

Using mksquashfs

mksquashfs is the tool for creating new squashed file systems, and for appending new data to existing squashed file systems. The general command-line format for mksquashfs is:

mksquashfs source1 source2 ... destination [options]

Where:

  1. source1, source2, etc are files and directories to be added to the resulting filesystem, given with relative and/or absolute paths.
  2. destination is a regular file or a block device (such as /dev/fd0 or /dev/sda2) where the squashed file system will be created.

Notes, for default mksquashfs behavior:

  • When new files are added to a new file system or appended to an existing one, mksquashfs will automatically rename files with duplicate names: if two or more files named text appear in the same resulting directory, the second file will be renamed to text_1, the third one to text_2, and so on.
  • Duplicate files will be removed, so there will be only one physical instance (By the SquashFS 2.x, you can disable the detection/removal of the duplicates with the -no-duplicates option).
  • If destination has a pre-existing SquashFS file system on it, by default, the new source items will be appended to the existing root directory. Use the mksquashfs options to force it to overwrite the whole destination and/or change the way new source items are added (see the sketch after this list).
  • If a single source file or directory is given, it becomes the root in a newly created file system. If two or more source files and/or directories are given, they will all become sub-items in the root of the new file system.
  • The resulting filesystem will be padded to a multiple of 4 Kb: this is required for filesystems to be used on block devices. If you are very sure you don’t need this, use the -nopad option to disable this operation.
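
As an illustration of those defaults, here is a hedged sketch that squashes two directories into one image while overwriting any existing image and skipping duplicate detection (dir1, dir2 and image.sqsh are placeholder names):

mksquashfs dir1 dir2 image.sqsh -noappend -no-duplicates

Without -noappend, running the command twice would append dir1 and dir2 into the existing image instead of recreating it.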

Using unsquashfs

unsquashfs is the tool for extracting data from squashed file systems. The general command-line format for unsquashfs is:

unsquashfs [options] target [files/directories to extract]

Where the target is the squashed file system to extract.

Notes for unsquashfs behavior:

  • If no destination path is specified, unsquashfs extracts the compressed file system into the ./squashfs-root directory.
  • The tool does not extract a squashed file system into an already existing directory unless the -f option is specified.
  • You can specify multiple files/directories to extract on the command line, and the items to be extracted can also be given in a file with the -e [file] option.

Managing SquashFS

This section will describe the use of mksquashfs and unsquashfs for managing SquashFS file.

Creating a SquashFS File

Creating Simple SquashFS

Create a squashed file system out of a single directory (for example /some/dir), and output it to a regular file named image.sqsh:

mksquashfs /some/dir image.sqsh

Note that the output filename could be anything, with any extension.

mksquashfs will then perform the squashing and print the resulting number of inodes and size of data written, as well as the average compression ratio. Now we have an image of the /some/dir directory in the image.sqsh file.

The content of image.sqsh is every file and directory inside /some/dir.

If you want to output the filesystem directly onto a device, e.g. a USB flash disk at /dev/sdb1:

mksquashfs /some/dir /dev/sdb1

Squashing File Systems

This section describes how to create a read-only compressed file system.

Mounting SquashFS File

To mount a SquashFS file, you use a loopback device. The command is:

mount -o loop -t squashfs image.sqsh /mnt/dir

For image.sqsh as the SquashFS file and /mnt/dir as the mount point.

You should see that the SquashFS can only be accessed read-only.

To mount it at boot time, you should add this entry to /etc/fstab (assuming the image is image.sqsh and the mount point is /mnt/dir):

/some/dir/image.sqsh /mnt/dir squashfs loop 0 0

SquashFS and Write Operation using UnionFS

UnionFS provides copy-on-write semantics for read-only file systems (such as SquashFS), which expands the possibilities. More detail on UnionFS can be found at http://www.filesystems.org/project-unionfs.html

Basically, UnionFS is a union of a writable filesystem and a read-only filesystem, merged into a single mount point.

Example:

You may want to make your /home/user1 squashed, to compress and back up your files without losing the ability to apply changes or write new files.

Create the ro.fs squashed file system and the rw.fs dir:

mksquashfs /home/user1 ro.fs
mkdir /home/rw.fs

Mount the squashed ro.fs file system using the loopback device:

mount -t squashfs -o loop ro.fs /mnt

Mount the unionfs filesystem as a union of /mnt and /home/rw.fs. This will be merged under the /home/user1 location:

cd /home
mount -t unionfs -o dirs=rw.fs=rw:/mnt=ro unionfs user1

Write operations under /home/user1 will actually be done in /home/rw.fs.

To mount the squashed directory tree at startup, you should load the squashfs and unionfs modules at boot time, and change the owner of the writable branch to match user1.

Here is the complete command list:

echo squashfs >> /etc/modules
echo unionfs >> /etc/modules
chown user1 /home/rw.fs

Add appropriate entry to /etc/fstab to mount the SquashFS and UnionFS at boot time.

/home/ro.fs /mnt squashfs loop 0 0
unionfs /home/user1 unionfs dirs=/home/rw.fs=rw:/mnt=ro 0 0

Extract the File System

Also known as unsquashing. The unsquash operation will decompress the squashed file.

Suppose we have a file image.sqsh and want to unsquash it:

unsquashfs image.sqsh

A new directory named squashfs-root will be created. The content of squashfs-root is the content of the file or directory you used to create the squashed filesystem.
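
If you only need part of the image, or want the output somewhere else, here is a hedged sketch (the /tmp destinations and the etc/hostname path inside the image are only examples):

# extract into a custom directory instead of ./squashfs-root
unsquashfs -d /tmp/extracted image.sqsh

# extract only selected files/directories from the image
unsquashfs -d /tmp/partial image.sqsh etc/hostname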

Emulating Raspberry Pi using QEMU

December 9, 2015 | Uncategorized | 1 Comment

Previously, we discussed the installation of various operating systems for the Raspberry Pi. All of that was done on the actual board. But what if we are in a situation where we can’t afford the board? Emulation is the way to experiment, and QEMU opens up that possibility.

Although emulation can make our life easier, you can’t always rely on it. Remember that emulation tries to imitate the real hardware, but it can’t be the real thing.

In this article, we will discuss how to emulate a Raspberry Pi board using QEMU. For this purpose I use:

  1. Slackware64 14.0 as host system
  2. QEMU 1.4.0
  3. Soft-Float Debian Wheezy with version 2013-05-29.
  4. OpenVPN

For Windows machine, you can go to this article instead.

Obtain the Material

Make sure you have QEMU installed. For Linux, you can follow this article to compile QEMU (using Slackware64 as example).

Next, download the images. As stated above, we will use soft-float Debian Wheezy. You can download it from Raspberry Pi’s official download page. The version I use is 2013-05-29, the latest version as of August 24th, 2013, and it can be downloaded here.

We also require a Linux kernel for QEMU. Download it from here.

The Hardware – Overview

Before we go to the next section, it is always a good idea to understand the system we want to work with, in this case the Raspberry Pi. See the Architecture of Raspberry Pi for details.

The system we will create using QEMU has:

  1. ARM1176JZF-S, see the datasheet here
  2. Ethernet

Note that QEMU cannot emulate GPIO and GPU.

Also, for full networking support we need a TAP network interface. See the next section.

Preparing the environment

Create a working directory. In this case, qemu-raspberry.

Now open up a terminal and enter the working directory.

Extract the disk image and kernel image we downloaded in the previous section.
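
For the Raspbian image this is just an unzip; a small sketch, assuming the image was downloaded as the usual zip archive whose name matches the 2013-05-29 release used later in this article:

cd qemu-raspberry
unzip 2013-05-29-wheezy-raspbian.zip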

Check whether you have the /dev/net directory and the /dev/net/tun device. If you don’t have them yet, do the following:

mkdir /dev/net
mknod /dev/net/tun c 10 200
/sbin/modprobe tun

Now create two new files, /etc/qemu-ifup and /etc/qemu-ifdown (the first script below is qemu-ifup, the second is qemu-ifdown). Make sure you give them executable permission.

#!/bin/bash
ETH0IPADDR=192.168.1.100
GATEWAY=192.168.1.1
BROADCAST=192.168.1.255

# First take eth0 down, then bring it up with IP address 0.0.0.0
/sbin/ifconfig eth0 down
/sbin/ifconfig eth0 0.0.0.0 promisc up

# Bring up the tap device (name specified as first argument, by QEMU)
/usr/sbin/openvpn --mktun --dev $1 --user `id -un`
/sbin/ifconfig $1 0.0.0.0 promisc up

# Create the bridge between eth0 and the tap device
/usr/sbin/brctl addbr br0
/usr/sbin/brctl addif br0 eth0
/usr/sbin/brctl addif br0 $1

# Only a single bridge so loops are not possible, turn off spanning tree protocol
/usr/sbin/brctl stp br0 off

# Bring up the bridge with ETH0IPADDR and add the default route

/sbin/ifconfig br0 $ETH0IPADDR netmask 255.255.255.0 broadcast $BROADCAST
/sbin/route add default gw $GATEWAY

and

#!/bin/bash

# Bring down eth0 and br0
/sbin/ifconfig eth0 down
/sbin/ifconfig br0 down

# Delete the bridge
/usr/sbin/brctl delbr br0

# Bring up eth0 in "normal" mode
/sbin/ifconfig eth0 -promisc
/sbin/ifconfig eth0 up

# Delete the tap device
/usr/sbin/openvpn --rmtun --dev $1

Make sure you already have openvpn installed.

Booting the disc image

Make sure the TAP interface has been activated. You only need to activate it once, and you can use it until you are done. Do the following:

/etc/qemu-ifup tap0

To boot QEMU, invoke the following command (make sure all materials have been downloaded and extracted):

qemu-system-arm -kernel kernel-qemu -cpu arm1176 -m 256 -M versatilepb \
-no-reboot -serial stdio -append "root=/dev/sda2 panic=1" -hda 2013-05-29-wheezy-raspbian.img \
-net nic -net tap,ifname=tap0,script=no,downscript=no

When you decide to stop the TAP interface, do the following:

/etc/qemu-ifdown tap0

Here’s the result:

[Image: pi_qemu_linux]

This article explains some of the more important syntactic and semantic differences between two assembler styles: Intel and AT&T. The AT&T style is used by the GNU Assembler (GAS) and the Intel style by the Netwide Assembler (NASM).

Although the goal of this article is specifically to show the differences between the syntaxes, the source code provided here has been tested on a Linux machine using the corresponding assembler. This article is written to help you more easily convert from one flavor of assembler to another.

Building the Program

When source code is involved, you can use the following commands to build the example: assemble, then link, depending on your assembler.

Assembling

ELF (Executable and Linkable Format) is the executable format used by Linux.

nasm -f elf -o program.o program.asm
as -o program.o program.s

Linking

ld -o program program.o

Linking when an external C library is used

ld --dynamic-linker /lib/ld-linux.so.2 -lc -o program program.o

Basic Structure

The structure of a program consists of at least a section for code, the heap, and the stack.

; Intel format

; Text segment begins
section .text

   global _start

; Program entry point
   _start:

; codes are written here

# AT&T format

# Text segment begins
.section .text

   .globl _start

# Program entry point
   _start:

# codes are written here
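
To make the skeleton concrete, here is a minimal sketch of a complete program in both styles: it just calls the Linux exit system call (eax = 1) with status 0, so it assembles and links with the commands shown earlier.

; Intel format (NASM): exit(0)
section .text

   global _start

_start:
   mov eax, 1        ; system call number for sys_exit
   mov ebx, 0        ; exit status
   int 80h           ; invoke the kernel

# AT&T format (GAS): exit(0)
.section .text

   .globl _start

_start:
   movl $1, %eax     # system call number for sys_exit
   movl $0, %ebx     # exit status
   int $0x80         # invoke the kernel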

1. Operands Order

One of the noticeable differences between the AT&T and Intel formats is the way they refer to source and destination operands within an instruction: the order of the source and destination operands is swapped.

Under the Intel format, the destination comes right after the instruction, followed by a comma and the source operand (the first operand is the destination). Under AT&T these roles are reversed: the source comes before the comma and the destination (the first operand is the source).

; Intel format

PUSH EBP
MOV EBP, ESP
SUB ESP, 48
CMP [EBP+4], 10

# AT&T format

pushl %ebp
movl %esp, %ebp
subl $48, %esp
cmpl $10, 4(%ebp)

At first glance, AT&T may seem more natural, since we read and write from left to right. However, this leads to some flaws and inconsistencies (not discussed here).

The primary reason for AT&T’s source-first operand order is the VAX assembly format, for which the AT&T syntax was originally invented. The Motorola 68000 and its descendants were heavily influenced by the VAX, and their assembly language format moves in this direction as well.

2. Naming

2.1 Value & Register Naming

AT&T is quite famous for its heavy use of prefixes. In AT&T, registers are prefixed with a ‘%’ and immediate values are prefixed with a ‘$’. Intel, on the other hand, uses plain naming for registers and immediate values.

Intel syntax doesn’t use a prefix for registers. However, it uses a suffix to distinguish hexadecimal and binary numbers from decimal: the ‘h’ suffix for hexadecimal numbers and the ‘b’ suffix for binary numbers. Also, Intel puts a leading 0 in front of a hexadecimal number that starts with a letter, while AT&T uses the 0x prefix.

; Intel format

MOV EAX, 1
MOV EBX, 0FFh
INT 80h

# AT&T format

movl $1, %eax
movl $0xFF, %ebx
int $0x80

2.2 Instruction Naming

The AT&T format uses slightly different instruction names than the Intel format. They differ in keeping with VAX and Motorola traditions, where instruction names include a suffix describing the size of the data they modify. Under the Intel format, these data sizes are normally described using the ‘BYTE PTR’, ‘WORD PTR’, and ‘DWORD PTR’ prefix phrases.

; Intel format

MOVZX EAX, BYTE PTR [ESI+5]
SUB EAX, 30
DEC WORD PTR [EBX]
INC CX
CMP AL, 5

# AT&T format

movzbl 0x5(%esi), %eax
subl $0x30, %eax
decw (%ebx)
incw %cx
cmpb $0x5, %al

Under AT&T format, instruction suffixes are “b” for byte size operations (8-bits), “w” for word size operations (16-bits), “l” for double-word operations (32-bits), and “q” for quad-word operations (64-bits). As you may notice in the ‘movzbl’ (Move with zero-extend) instruction, more than one suffix letter is used when an instruction’s source and destination operand differ in size. The first suffix letter describes the source operand while the second letter describes the destination.

3. Memory Addressing

Under the Intel format, memory addressing is a simple address calculation enclosed by ‘[’ and ‘]’. The AT&T format, on the other hand, uses ‘(’ and ‘)’ to enclose it.

3.1 Indirect Addressing Mode

Recall that in x86 assembly language, there are five indirect addressing modes when writing an instruction:

  1. immediate indirect
  2. register indirect
  3. base register + offset indirect
  4. index register * width + offset indirect
  5. and base register + index register * width + offset indirect.

; Intel format

; Immediate
MOV EAX, [0100]

; Register
MOV EAX, [ESI]

; Register + Offset
MOV EAX, [EBP-8]

; Register * Width + Offset
MOV EAX, [EBX*4 + 0100]

; Base + Register*Width + Offset
MOV EAX, [EDX + EBX*4 + 8]

# AT&T format

# Immediate
movl 0x0100, %eax

# Register
movl (%esi), %eax

# Register + Offset
movl -8(%ebp), %eax

# Register * Width + Offset
movl 0x100(,%ebx,4), %eax

# Base + Register*Width + Offset
movl 0x8(%edx, %ebx,4), %eax

Under the AT&T format, indirect addressing modes are written in the general form “OFFSET(BASE, INDEX, WIDTH)”. OFFSET, if present, must be a constant integer. BASE and INDEX, if either is present, must be registers. WIDTH, if present, applies to the register named in INDEX, and must be the constant 1, 2, 4, or 8. If WIDTH is not specified, the default constant 1 is used.

Under Intel, indirect addressing is as simple as the formula “[BASE + INDEX*WIDTH + OFFSET]”, using the same terminology as the AT&T format.

Under AT&T, all immediate addresses are written simply as an OFFSET with a missing BASE, INDEX, and WIDTH parameter. The immediate indirect address is written by itself with no special prefix or suffix characters.

Install Darktable from Source

December 9, 2015 | Uncategorized | No Comments

Darktable is an open source photography workflow application and RAW developer: a virtual light table and darkroom for photographers. It manages digital negatives in a database, lets us view them through a zoomable light table, and enables us to develop and enhance raw images.

In this article we will discuss installing darktable on a Linux operating system. For this purpose I use Slackware64 14.0 as the test machine. The method we use will be source compilation.

Dependency

Make sure you meet the dependencies listed here:

  1. intltool
  2. libatk
  3. libbabl
  4. libgegl
  5. libcairo2
  6. libexiv2
  7. libfontconfig1
  8. libfreetype6
  9. libgomp
  10. libgtk2
  11. libjpeg
  12. libtiff4
  13. libcms2
  14. liblensfun
  15. libpng12
  16. libsqlite3
  17. libstdc++6.4.4
  18. libxml2
  19. libopenexr
  20. libcurl4 gnu TLS
  21. libgphoto2-2
  22. libdbus glib
  23. libgnome-keyring
  24. librsvg2
  25. fop
  26. libflickcurl
  27. cmake

Obtain the Materials

The source code of darktable can be downloaded using git. Therefore it is recommended to install git before we proceed.

git clone git://github.com/darktable-org/darktable.git
cd darktable

Or alternatively, we can download the latest source code from sourceforge and extract it.

Installation

Assuming we have the source code and are now at the root of the source directory: the default installation prefix is /opt. If another path is preferred, use the --prefix argument, for example --prefix /usr/local. In this article we will use /usr/local.

./build.sh --prefix /usr/local
cd build
make
make install

Make sure we have enough privileges to do installation.

At this point we have installed darktable.
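
Assuming /usr/local/bin is already on your PATH (the usual case), darktable can then be started from a terminal; adjust the path if you chose a different prefix:

darktable
# or, with the full path for the prefix used above
/usr/local/bin/darktable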

Glossary

In this article we use some term:

  • RAW: unprocessed capture straight from the camera’s sensor to the memory card, nothing has been altered.

Installing SQLite on Windows

December 9, 2015 | Uncategorized | 1 Comment

SQLite is an in-process library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. The code for SQLite is in the public domain and is thus free for use for any purpose, commercial or private. SQLite is currently found in more applications than we can count, including several high-profile projects. Unlike most other SQL databases, SQLite is an embedded SQL database engine; it does not have a separate server process. SQLite reads and writes directly to ordinary disk files, and a complete database is contained in a single disk file. The file format is cross-platform: a database file can be copied between 32-bit and 64-bit systems or between big-endian and little-endian architectures. SQLite is not a replacement for a complex database server, but a replacement for the fopen() primitive.

In this article we will discuss installing SQLite on the Windows operating system. For this purpose I use Windows 7 32-bit as the test machine. The method we use will be binary installation.

Obtain the Materials

The latest SQLite version is 3.7.17. Download the prebuilt binaries from SQLite’s main download page. You need to download the following:

  1. A command-line shell for accessing and modifying SQLite databases. This program is compatible with all versions of SQLite through 3.7.17 and beyond. Download here.
  2. This ZIP archive contains a DLL for the SQLite library version 3.7.17 for 32-bit x86 processors using the Win32 API. The DLL is built using SQLITE_ENABLE_COLUMN_METADATA so that it is suitable for use with Ruby on Rails. Download here.
  3. (optional) An analysis program for database files compatible with all SQLite versions through 3.7.17 and beyond. Download here.

Installation

Assuming we have downloaded the above files:

  • Create a directory (folder) C:\sqlite and unzip the downloaded files into this folder. This should give us sqlite3.exe, sqlite3.def, and sqlite3.dll. If you have downloaded the analysis program, you will also get sqlite3_analyzer.exe.

  • Add C:\sqlite to your PATH environment variable

  • open the command prompt (cmd) and issue the sqlite3 command, which should display a result something like below.

C:\>sqlite3
SQLite version 3.7.15.2 2013-01-09 11:53:05
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite>

At this point we have installed SQLite.
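
As a quick smoke test, here is a hedged sketch of a first session in the sqlite3 shell; test.db and the table name are just placeholders.

C:\>sqlite3 test.db
sqlite> CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT);
sqlite> INSERT INTO notes (body) VALUES ('hello sqlite');
sqlite> SELECT * FROM notes;
1|hello sqlite
sqlite> .quit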

To explore / interact with SQLite for first time, see this article.

Installing Firebird on Linux

December 9, 2015 | Uncategorized | No Comments

Firebird is a relational database offering many ANSI SQL standard features that runs on various operating systems. It offers excellent concurrency, high performance, and powerful language support for stored procedures and triggers.

In this article we will discuss installing Firebird on Linux. For this purpose I use Slackware64 14.0 as the test machine; however, it can also be installed on Linux distributions other than Slackware. The method we use will be source installation.

Obtain the Materials

The Firebird SQL project uses SourceForge to host its source code, which can be found here. The latest version we can find is Firebird 2.5.2, which can be downloaded here.

Installation

Assuming we have extracted the archive and are in the source code’s directory, do the following to compile and install:

./configure
make
make install

Note that to do installation, you should have enough privilege.

At this point, we have Firebird installed.

The goal of every programmer is to produce a good program. Nearly all programmers strive to sharpen their skills in coding and engineering. But programming is not only a matter of code.

A good program is not only one that is responsive and robust; we should not forget that other factors matter for a project as well.

In this article, I list some characteristics of good program design, according to me.

Minimal Complexity

The main goal in any program should be to minimize complexity. As developers, much of our time will be spent maintaining or upgrading existing code. If it is a complete mess, then your life is going to be that much harder. Try to avoid solutions where you use one complex line to replace 20 lines of easy-to-read code. In a year, when you come back to that code, it will take you that much longer to figure out what you did.

The best practice here is structuring your code and following established patterns. Modularity is also good for design: don’t write all your code in a single source file.

Ease of Maintenance

This is making your code easy to update. Find where your code is most likely going to change, and make it easy to update. The easier you make it, the easier your life is going to be down the road.  Think of it as a little insurance for your code.

Loose Coupling

So what is loose coupling? It is when one portion of code is not dependent on another to run properly. It is bundling code into nice little self-reliant packages that don’t rely on any outside code. How do you do this? Make good use of abstraction and information hiding.

Extensibility

This means that you design your program so that you can add or remove elements from your program without disturbing the underlying structure of the program. A good example would be a plug-in.

Reusability

Write code that will be able to be used in unrelated projects.  Save yourself some time. Again information hiding is your best bet for making this happen.

High Fan-in

This refers to having a large number of classes that use a given class. This implies that you are making good use of utility classes. For example you might have a bunch of classes that use the Math class to do calculations.

Low to Medium Fan-out

This refers to having a low to medium number of classes used by any given class. If you had a class that includes 7 or more classes, this is considered high fan-out. So try to keep the number of classes you include down to a minimum. Having a high fan-out suggests that the design may be too complex.

Portability

Simply put, design a system that can be moved to another environment.  This isn’t always a requirement of a program, but it should be considered. It might make your life easier if you find out your program does have to work on different platforms.

Leanness

Leanness means making the design with no extra parts. Everything within the design has to be there. This is generally a goal if you have speed and efficiency in mind. A good example of where this might come in handy is creating a program that has to run on a system with limited resources (cell phones, older computers).

Standard Technique

Try to standardize your code. If each developer puts in their own flavor of code, you will end up with an unwieldy mess. Try to lay out common approaches for developers to follow, and it will give your code a sense of familiarity for all developers working on it.
