Create Your Own Bash Command

You may want a quick command for personal use on your Linux machine. Say you want a custom converter between currencies, a quick calculation of time, or a terminal command for anything you regularly work out in your head.

The steps to create a command are simple:

  1. In any folder of your own, create a file with the name of the command.

    cd myfolder
    touch mycommand

  2. Open the file with any text editor and write a simple script that does what you need and outputs your result.

    For example, a simple multiplier for quick conversions:

    #!/bin/bash

    # Exit with an error if no argument was supplied.
    if [ $# -eq 0 ]
    then
        echo "No arguments supplied"
        exit 1
    fi

    # Conversion rate: 60 minutes per hour.
    rate=60
    converted=$(($1 * rate))
    echo
    echo "$1 hours = $converted minutes"
    echo

  3. Make the script executable, then copy it to your local bin (which is usually part of your PATH already).

    chmod +x mycommand
    sudo cp mycommand /usr/local/bin/

Now you can execute your command from any directory just by opening a terminal and typing the command name along with its arguments.
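
For example, invoking the conversion script above with an argument of 10 would print:

    mycommand 10

    10 hours = 600 minutes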

Creating Your Personal Website

I always wanted to create a personal website that could serve as an online résumé. While I had created small pages locally, and also blogged, I had never created an entire website from scratch. Recently, I decided to jump into it and create a site of my own. I shall put down some quick pointers to help anyone wishing to make one too.

Keep in mind – this guide does not cover more complex functionality such as data storage and user credentials.

Purchase your domain

While there are a number of sites that can give you a domain for free, they usually only give you a sub-domain of their own site, so your URL ends up as something like your-name.theirsite.com. It’s always better to have a custom domain such as your-name.com: it’s easy to remember and looks good. However, you will have to pay a yearly registration fee.

I also recommend that you buy privacy protection when registering your domain, which keeps personal details such as your address and email address out of the public record. The site you are purchasing your domain name from should offer that as an add-on.

One of the popular sites to purchase domains is GoDaddy, where I bought mine as well.

Create your site locally

Web development with HTML, CSS, and JavaScript/jQuery can be done locally on your machine. Here are some handy tips.

  • If you do not want to develop your site from scratch, you can always use ready-made templates from online sources and customize them. You can find loads of templates with a quick Google search. However, if you wish to create everything yourself, go for it!
  • Use source control to prevent losing your work. This guide leverages free hosting on GitHub, so if you wish to use that, you can choose GitHub for source control over alternatives like Bitbucket – but keep in mind, your code will be visible to everyone if you do.
  • Use high-resolution images, if you are using any. Your site needs to look good even on high-resolution monitors, such as iMacs.
  • If you are looking for icons, you can find several on sites like Font Awesome.
  • Use Google Chrome to view your site. It has a built-in tool to view the site at different screen sizes, which comes in handy while developing a responsive site. (A quick way to serve your pages locally while testing is sketched below.)
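
If you would rather view your pages over HTTP than open the files directly, one quick way (assuming Python 3 is installed; the folder name below is just an example) is to serve the site folder locally and open http://localhost:8000 in your browser:

    cd mysite
    python3 -m http.server 8000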

Host your site

There are loads of ways you can host your site. Many paid hosting providers do it for you, typically managed through a control panel such as cPanel. If you are looking for a free way to do it, GitHub Pages is ideal. The con is that anyone can see your source code, so use it only if that’s fine by you.
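
As a rough sketch of the GitHub Pages route: create a repository on GitHub named <username>.github.io (with <username> replaced by your own GitHub username), then push your site’s files to it. The branch name may differ depending on your Git setup.

    cd mysite                        # the folder containing index.html
    git init
    git add .
    git commit -m "First version of my personal site"
    git remote add origin https://github.com/<username>/<username>.github.io.git
    git push -u origin main

After a few minutes, the site should be live at https://<username>.github.io.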

Continue reading

Getting Started With CUDA Programming

Graphics Processing Units (GPUs) were traditionally used for rendering graphics on screen. They were optimized for throughput and could render millions of pixels simultaneously by performing the same computations on millions of individual data elements in parallel. This immense processing power is now being harnessed for general-purpose applications as well. Parallel programming is becoming increasingly important in a world where CPUs are becoming harder to optimize for speed and energy efficiency and are approaching their limits.

For a beginner looking to get started with parallel programming, NVIDIA’s parallel programming framework, CUDA, may be a good place to start. NVIDIA GPUs, found in a large number of PCs today, can be used for general-purpose parallel programming by writing CUDA applications in languages such as C, C++, and Fortran.

This post aims to help those wanting to begin CUDA C/C++ programming on their own Linux machines get off to a smooth start.
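
As a quick sanity check before you begin, you can verify that your machine has an NVIDIA GPU and that the CUDA compiler is available (these commands assume the CUDA toolkit is already installed):

    lspci | grep -i nvidia    # should list an NVIDIA graphics device
    nvcc --version            # should print the CUDA compiler version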

Continue reading

An Overview of EDF and SRP

EDF (Earliest Deadline First) is a scheduling policy used in real-time systems, and SRP (Stack Resource Policy) is a resource allocation policy for real-time systems. The two work well together to create a deterministic scheduling and resource management system. The following is a brief overview of the policies; exact implementation details and current research in these areas can be found in the ACM Digital Library.

Introduction to EDF

Earliest deadline first (EDF), or “least time to go”, is a dynamic scheduling algorithm used in real-time operating systems to place processes in a priority queue. In this scheme, the scheduler is invoked at specific ‘decision points’. These decision points, or scheduling events, occur whenever a job finishes or a new job is released and added to the ready queue. At each such point, the ready queue is searched for the process closest to its deadline, which is then selected for execution and dispatched.

Intuitively, this policy of dispatching jobs that are closest to the deadline means that jobs with earlier deadlines have a higher priority than jobs with later deadlines. This can be explained in a simple, easy-to-remember phrase: “tasks that are due sooner are done sooner”.

EDF is an optimal scheduling algorithm on preemptive uniprocessors. Optimal here means that if a feasible schedule exists for a given task set, then the scheduler will execute it without any deadline misses. EDF can guarantee that all deadlines are met provided that the total CPU utilization is not more than 100%. However, when the system is overloaded, some or all jobs of tasks in the task set will miss their deadlines. Exactly which jobs miss deadlines is largely unpredictable; it depends on the conditions of overload, i.e. the exact deadlines and the time of overload.
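
For the classic case of independent periodic tasks whose deadlines equal their periods, this utilization condition is easy to check: sum each task’s computation time C divided by its period T, and compare the total against 1. A small worked example with three tasks:

    U = C1/T1 + C2/T2 + C3/T3
      = 1/4 + 2/6 + 3/12
      ≈ 0.83 ≤ 1, so the task set is schedulable under EDF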

So, a schedulability test must be run on the task set to determine whether it is schedulable. If there is a possibility of a job missing its deadline, then the task set is not 100% schedulable, and EDF cannot give any guarantees about jobs adhering to their deadlines. In this case, in a hard real-time system, some tasks will have to be excluded from the task set in order to guarantee schedulability for the remaining ones. In systems whose failure can cause catastrophes, it is essential to ensure that every possible task set that can execute on the system is 100% schedulable, with no deadline misses. Some degree of error can be tolerated in soft real-time systems, where adhering to the deadline is a measure of performance but missing it is not particularly catastrophic.

Continue reading

Running DiskSim 4.0 On 64-Bit Ubuntu

DiskSim is disk simulation software used in I/O analysis research. It is a research tool rather than a commercial product, and hence requires some tweaking to run on your system.

This is a guide to installing and running DiskSim 4.0 on 64-bit Ubuntu machines. 64-bit machines require a patch, as DiskSim was originally intended for 32-bit machines. Some changes also need to be made to certain Makefiles; without them, compilation of the DiskSim code fails.

Firstly, download the 64-bit patched code for DiskSim 4.0 as a zip file from the following GitHub repository: DiskSim 4.0

Unzip the contents into any directory. For convenience, use the user home folder (as is done in the examples for the rest of this post).

Now, cd into the directory disksim-4-0-x64-master from the terminal. When you run ls, you should see a list of folders such as diskmodel, doc, etc.
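
As a concrete sketch of the steps so far (the zip filename is an assumption based on the repository name; adjust it to match your download):

    cd ~
    unzip disksim-4-0-x64-master.zip    # extracts into disksim-4-0-x64-master
    cd disksim-4-0-x64-master
    ls                                  # should list diskmodel, doc, etc.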

The libddbg, libparam, and diskmodel directories require no changes to compile correctly. However, the memsmodel and src directories require a few modifications to work correctly. So, run the following commands in sequence:

Continue reading

Recompiling Your Linux Kernel

This is a brief guide on how to recompile your Linux kernel, based on Ubuntu.

Please note, this post only deals with recompiling your kernel with no configuration changes and no upgrades to a different version. It is not aimed at advanced users; rather, it is a guide for people planning to take their first shot at recompiling the kernel.

To get the current kernel version on your Ubuntu, type the command:

uname -r

Recompiling the kernel will require some features to be installed (if you don’t have them already), so run:

sudo apt-get install build-essential

For making configuration changes with menuconfig, the ncurses library needs to be installed, but that is not required for this particular guide.

Now, login as root user by typing:

sudo su (for Ubuntu)

The following commands for recompilation must be executed as root.

Change your current working directory to /usr/src:

cd /usr/src

Before proceeding, note the list of folders already present in /usr/src by running ls. Later, a new folder will get added to this directory, and you must be able to identify it; one simple way to keep track is sketched below.
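
A simple way to do this is to save the current listing to a file and compare it against the directory later (the file path here is just an example):

    ls /usr/src > /tmp/usr-src-before.txt
    # ...later, after the new folder appears:
    diff /tmp/usr-src-before.txt <(ls /usr/src)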

Continue reading

Diversity in Smartphone Usage

This is a short summary or overview I wrote after reading a conference paper from the ACM Digital Library, where the original paper can be found.

Experimental Setup

A study was conducted to analyse the variation in smartphone use across a variety of users. Detailed usage traces were collected from 255 users, grouped into two datasets.

Dataset 1 – Consisted of 33 Android users, of whom 17 were research workers and 16 were students. A custom logging utility was deployed on HTC Dream smartphones with unlimited data plans. The logger collected detailed data such as the state of the screen, start and end times of calls, application interaction time, network traffic, and battery level. Between 7 and 21 weeks of data were gathered per user, with the average being 9 weeks. The logger ran in the background, keeping data records in a local SQLite database on the phone and uploading them only when the phone was charging.

Dataset 2 – Consisted of 222 Windows Mobile users across different demographics and geographic locations. The demographic categories were: SC (social communicators; required voice and text), LPU (life power users; needed a multi-function device), BPU (business power users; needed an advanced phone for business), and OP (organiser practicals; needed a simple device for management). Between 8 and 28 weeks of data were gathered per user by a third party, with the average being 16 weeks. Traces were collected using a logger that recorded start and end times of applications using API calls.

Continue reading