Thursday, July 19, 2012

Where to put Android app start-up initialization

There is no 'main' function in an Android app.  There is a main activity that gets launched when the user taps the app's icon, but other activities can be started as well.  So, if the app needs some setting up at start-up, where should that code go?  It turns out that you can have a main entry point that gets called every time the program starts.  Simply put your initialization code there and it's all good.

To use it, you have to create a new class that extends Application.  Within it you override the onCreate() method.  This method will get called when the app starts.
public class MyMainEntryPoint extends Application {

    @Override
    public void onCreate() {
        super.onCreate();
        // init goes here
    }
}
Then, in AndroidManifest.xml, you have to declare your entry point using the android:name attribute on the application tag.
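The declaration looks something like this (a minimal sketch; the leading dot assumes MyMainEntryPoint sits in your app's root package):

```xml
<application
    android:name=".MyMainEntryPoint"
    android:label="@string/app_name">
    <!-- your activities are declared here as usual -->
</application>
```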



Tuesday, July 17, 2012

Display Android toast at the top of screen

Android offers an easy way to notify the users of your app of important messages, in what is called a toast.

It is easy to use, too.  Just a few lines and you are good to go.
Toast.makeText(context, "text to show", Toast.LENGTH_SHORT).show();
By default, it will show near the bottom of the screen.  However, you can show it anywhere on screen with a bit of code.  One of my toasts shows in the upper half of the screen.  The code to do just that looks like this:
Toast toast = Toast.makeText(context, "text to show", Toast.LENGTH_SHORT);
Display display = getWindowManager().getDefaultDisplay();
int height = display.getHeight();
toast.setGravity(Gravity.CENTER, 0, -(height / 4));
toast.show();

By the way, be aware that the user will be able to click through the toast.  The control under the toast will receive that click.

Saturday, July 14, 2012

Show Android soft keyboard programmatically

One of my apps is an English-to-Thai dictionary, so I want to show the soft keyboard as soon as the user starts the program.

The problem is that if the code is called from onCreate(), sometimes the soft keyboard won't show.  I guess internally something is not quite ready.  Anyway, it is easily fixed by delaying it a bit.  In the code below, et is my EditText, in which the user will type the word, so my code looks like this:
et.postDelayed(new Runnable() {
    public void run() {
        InputMethodManager keyboard = (InputMethodManager)
                getSystemService(Context.INPUT_METHOD_SERVICE);
        keyboard.showSoftInput(et, 0);
    }
}, 200);
This code delays displaying the soft keyboard by 200 ms.  There must be a more elegant way, but this code works fine on all the Android devices I tested, so it stays.

Friday, July 13, 2012

Android text-to-speech programming gotcha.

Implementing text-to-speech (TTS) on Android is quite straightforward.  There are tons of tutorials out there.  Just google "Android TTS tutorial" and you will find them.  However, many of them fail to point out a very important detail that had me stumped for a while.

Like many Android services, TTS is shared among applications.  The Android reference says:

When you are done using the TextToSpeech instance, call the shutdown() method to release the native resources used by the TextToSpeech engine.

So, you want to be nice and release the TTS whenever your app gets paused or killed.  To keep things simple, you would generally initialize a shared resource in onResume(), because this function is called both when your app is started and when it is resumed.  Similarly, you would release the shared resource in onPause(), since this function is called when the app gets killed or paused.

However, if you do that with TTS, you are in trouble.  To understand why, you first need to know how to use TTS on Android.

private void initTts() {
    Intent checkIntent = new Intent();
    checkIntent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
    startActivityForResult(checkIntent, MY_DATA_CHECK_CODE);
}

To use TTS on Android, you need to check its availability before you can initialize it.  You check TTS availability by starting the checking activity as shown above.  When the checking activity returns, the function onActivityResult(...) is called and you can check the result like this:
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == MY_DATA_CHECK_CODE) {
        if (resultCode == TextToSpeech.Engine.CHECK_VOICE_DATA_PASS) {
            tts = new TextToSpeech(this, this);
        } else {
            // missing data, install it
            Intent installIntent = new Intent();
            installIntent.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
            startActivity(installIntent);
        }
    }
}
If the resultCode is CHECK_VOICE_DATA_PASS, then TTS is available.  If it's not, you can start another activity that asks the user whether they want to install the TTS data.

If TTS is available, you can safely initialize it by creating a new TextToSpeech object.  You need to pass in a class that implements OnInitListener.  That class must implement the function onInit(int status), which is called by the TextToSpeech engine to let the application know whether the TTS initialization was successful.

public void onInit(int status) {
    if (status == TextToSpeech.SUCCESS) {
        // TTS is ready; enable the speak button, etc.
    }
}
This is all you need to do to use TTS in your app.  So what's the problem?  If you look at the first step, where we start a new activity to check the availability of TTS, you will see that we cannot put this step inside onResume()!  Why?  Because starting a new activity pauses the current activity.  Since you also release TTS in onPause(), you would release TTS as soon as you try to initialize it.  Not good.  It gets worse: when the checking activity is done, onResume() is called again, making your app try to initialize TTS again, and it goes into an infinite loop of initializing and releasing the TTS.

You could try to fix the problem by keeping a state variable that tracks where the initialization is at, but I just did what I saw in code on the Android Developers site (before it got a facelift; I cannot find it there anymore) and put the initialization code in onCreate() and the release code in onDestroy().

Hope it helps.

Thursday, July 12, 2012

Design Android application icon using Inkscape

We programmers suck at graphic design, which is why on Google Play you can find lots of icons drawn with MS Paint or something like that :)

While we will never be great at this, it doesn't mean we have to settle for sub-par icons.  In this post I will show you how I made this icon in Inkscape using only a few basic drawing tools, all in about 15 minutes.  It's not that good compared to the professionally-made ones, but I think it looks better than a lot of icons on Google Play.

Why use Inkscape, and not, say, Gimp?  Because Inkscape is a vector graphics program, which allows us to resize the image without losing quality.  This is important because Android needs different sizes of the icon to support different screen resolutions.

Step 1:  Create basic shape
node-editing tool
I drew a glass using three basic shapes.  One is a rounded rectangle cut in half; the other two also started out as rectangles that got modified: first converted to a path (menu Path->Object to Path), then reshaped by dragging the nodes inside the path with the node-editing tool.  Then I used Union (menu Path->Union) to combine them into one object (Fig. 1).

Fig. 1. create basic glass shape

Step 2: Add border
Next I add the border around the glass by:
  1. Make a copy of the glass (Ctrl-C, Ctrl-V works fine)
  2. Select the copy and go to menu Path->Dynamic Offset.  You will see a single drag handle (see the picture below, the second one from the left)
  3. Drag that handle outward a bit
  4. Change its color to light gray (we will change it to white later, but light gray lets us see it against the white background)
  5. Put the original glass on top of it.
The whole process in this step is summed up in Fig. 2.

Fig. 2. add border

Step 3: Add highlight
Fill and Stroke Dialog
So far the glass looks flat.  Some highlights will add depth to the image.  First I drew some rounded white rectangles on the glass and selected them.  Then I opened the Fill and Stroke dialog and adjusted their blur and opacity levels until they looked nice.  The result is shown in Fig. 3.

So far we have only used basic drawing tools and adjusted the opacity and blurriness of the shapes.  Guess what: that is all we need to know for the rest of the tutorial!
Fig. 3. add highlight

Step 4:  The background
Now we will create the lime background.  Follow Fig. 4 along with these steps:
  1. Create a rounded rectangle.  The fill color doesn't matter yet, but the border should be black.
  2. Copy it, then use the Fill and Stroke dialog to blur the copy.
  3. Put the blurred one under the original.  You should see a drop-shadow effect.  Adjust it to your liking.
  4. Change the color of the original to lime-green and adjust its opacity to about 70%, then change the border color to white.

Fig. 4. icon background

Step 5: Background texture
Well, it's not really a texture, just some washed-out color, but it makes the background less boring.  I drew a white circle and squeezed it down, then blurred it and adjusted its opacity.  You should get a result similar to what is in Fig. 5.
Fig 5. background washout

Step 6: Put things together
Now that the background is ready, I put my glass on top of it.  Again, I used the Fill and Stroke dialog to adjust the glass opacity so that it blends nicely with the background.  Don't forget to change the color of the glass border to white first.
Fig 6. put things together

Step 7: Add some liquid
I think it looks good already, but someone might mistake our glass for a microphone or something.  Let's add some liquid to it.  First I drew a circle and cut it in half, then placed it on top of the glass, but under the highlights (and, can you guess, adjusted its opacity).  See Fig. 7.

Fig 7. add liquid

There it is; we are done.  Now you can export it to a PNG at whatever sizes your apps require.  By the way, you can download the finished icon in SVG format here.  See, even if all you know about Inkscape is how to draw basic shapes, change colors, drag some nodes, and adjust blurriness and opacity, you can still get a pretty good result :)

Wednesday, July 11, 2012

Frame-by-frame animation on Android using AnimationDrawable

While developing Redcard (which got started as a joke among friends while watching Euro 2012), I wanted to have a small animation alternating between two images, mimicking a referee's hand gesture telling the player to shut up (there is a sound effect too; it uses text-to-speech, but that is for another post).

In the Android API reference I saw the class AnimationDrawable, which looked like a perfect solution.  The class is described like this:

The simplest way to create a frame-by-frame animation is to define the animation in an XML file, placed in the res/drawable/ folder, and set it as the background to a View object. Then, call start() to run the animation.
An AnimationDrawable defined in XML consists of a single <animation-list> element, and a series of nested <item> tags. Each item defines a frame of the animation.

It sounds simple enough, but it took me a while to get it to work.  Here's what I needed to do to get it working.

First I defined my animation in /res/anim/my_anim.xml

<?xml version="1.0" encoding="utf-8"?>
<animation-list xmlns:android="http://schemas.android.com/apk/res/android"
    android:oneshot="false">
  <item android:drawable="@drawable/frame_1" android:duration="500"/>
  <item android:drawable="@drawable/frame_2" android:duration="500"/>
</animation-list>

frame_1 and frame_2 are PNG images stored in the drawable directories.  They come in different sizes to support different screen resolutions.

Then, in an activity layout, I defined an ImageView like this

<ImageView
    android:id="@+id/imgMyAnim"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:background="@anim/my_anim" />

Notice the background attribute, which points to my animation XML.

In my activity source code, I obviously needed a reference to the ImageView.  I also defined mMyAnim, which is of type AnimationDrawable.

private ImageView mImgMyAnim;
private AnimationDrawable mMyAnim; 
mImgMyAnim references the ImageView previously defined in the layout.  The assignment is done in onCreate():

mImgMyAnim = (ImageView) findViewById(R.id.imgMyAnim);
mMyAnim = (AnimationDrawable) mImgMyAnim.getBackground();
Notice that mMyAnim is assigned the background of the ImageView (which is where the animation XML is attached).

The setup is complete.  To start the animation, simply call mMyAnim.start().
Calling mMyAnim.stop() stops the animation.

Tuesday, July 10, 2012

YUV420 to bitmap conversion in Android

There are a couple of ways to retrieve image data from the camera.  You could start the built-in camera app from your activity, or you could call the camera API to take a picture and return it to your activity as a JPG.  However, sometimes you don't need a full-resolution image.  In that case, you can just grab the preview image from the camera preview screen.  You do that by calling Camera.setOneShotPreviewCallback(f), where f is a Camera.PreviewCallback.  Soon after the call, your callback is invoked with the image data passed to it.

    new Camera.PreviewCallback() {
        public void onPreviewFrame(byte[] data, Camera camera) {
            // do something with the image data
        }
    }

However, the image data is in YUV420 format.  The API actually lets you specify other formats, but most phones only support YUV420.
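To make the layout concrete: in the semi-planar variant (NV21) that Android preview frames use, the buffer starts with width*height luma (Y) bytes, followed by interleaved V,U byte pairs, with one pair shared by each 2x2 block of pixels.  The little index helpers below are my own illustration of that layout, not part of any Android API:

```java
public class Nv21Layout {
    // Offset of the Y (luma) byte for pixel (x, y).
    static int yIndex(int width, int x, int y) {
        return y * width + x;
    }

    // Offset of the V (chroma) byte shared by pixel (x, y)'s 2x2 block.
    // The chroma plane starts right after the width*height luma bytes;
    // each chroma row covers two pixel rows, and V comes before U.
    static int vIndex(int width, int height, int x, int y) {
        return width * height + (y >> 1) * width + (x & ~1);
    }

    // The U byte immediately follows its V byte.
    static int uIndex(int width, int height, int x, int y) {
        return vIndex(width, height, x, y) + 1;
    }
}
```

This is why the decoding code later in this post computes the chroma pointer as frameSize + (j >> 1) * width and reads V before U.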

Support for YUV420 in Android is limited.  You have to convert it to a bitmap before you can do anything useful with the image.  The class YuvImage lets you convert YUV420 data to JPG, but not much else; you can then decode the JPG into a bitmap like this.

// inside onPreviewFrame(byte[] data, Camera camera):
android.hardware.Camera.Parameters param = camera.getParameters();
int format = param.getPreviewFormat();
int width = param.getPreviewSize().width;
int height = param.getPreviewSize().height;
YuvImage image = new YuvImage(data, format, width, height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
Rect rect = new Rect(0, 0, width, height);
image.compressToJpeg(rect, 100, out);
byte[] bytes = out.toByteArray();
Bitmap bmp = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
Obviously, this is not the best way to go about it.  It is reasonably fast (the conversion code is JNI-wrapped), but the image quality suffers.  I found that the resulting images lose contrast and the colors are dimmed.  You can see the difference in the image below (left is the result of converting YUV420 -> JPG -> bitmap; right is from decoding YUV420 directly to a bitmap, more about this later).
Another method is to convert YUV420 to a bitmap directly.  There is code out there, like the one in this post on Stack Overflow, that works well.

public int[] decodeYUV420SP(byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    int[] rgb = new int[width * height];
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000)
                    | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
    return rgb;
}
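To see what the fixed-point arithmetic inside that loop actually does, here is the same math applied to a single pixel (a standalone sketch; yuvPixelToArgb is a name I made up for illustration):

```java
public class YuvMath {
    // Convert one YUV pixel to packed ARGB using the same fixed-point
    // coefficients as decodeYUV420SP above (each is the real-valued
    // ITU-R 601 coefficient scaled by 1024).
    static int yuvPixelToArgb(int yByte, int uByte, int vByte) {
        int y = (0xff & yByte) - 16;
        if (y < 0) y = 0;
        int u = (0xff & uByte) - 128;
        int v = (0xff & vByte) - 128;

        int y1192 = 1192 * y;                // ~1.164 * y
        int r = y1192 + 1634 * v;            // ~1.596 * v
        int g = y1192 - 833 * v - 400 * u;   // ~0.813 * v, ~0.391 * u
        int b = y1192 + 2066 * u;            // ~2.018 * u

        // Clamp each channel to the 18-bit intermediate range, then
        // shift it down to 8 bits and pack as 0xAARRGGBB.
        if (r < 0) r = 0; else if (r > 262143) r = 262143;
        if (g < 0) g = 0; else if (g > 262143) g = 262143;
        if (b < 0) b = 0; else if (b > 262143) b = 262143;
        return 0xff000000 | ((r << 6) & 0xff0000)
                | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
    }
}
```

For example, a "video black" pixel (y = 16, u = v = 128) maps to opaque black, and "video white" (y = 235, u = v = 128) comes out near pure white.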
This function returns an array of RGB pixel data from which you can construct the bitmap, like this: Bitmap bmpx = Bitmap.createBitmap(rgb, width, height, Bitmap.Config.ARGB_8888);  However, this function is quite slow.  Given that my program only needs a small portion at the center of the image, it is wasteful to decode the whole image only to crop most of it out later.  So I modified the function to accept a rectangle bounding the area I want decoded.  This is the modified function.

public int[] decodeYUV420SP(byte[] yuv420sp, int width,
                            int height, Rect rect) {
    final int frameSize = width * height;
    int ow = rect.right - rect.left;
    int oh = rect.bottom - rect.top;
    int[] rgb = new int[ow * oh];
    for (int j = 0, yp = 0, ic = 0; j < height; j++) {
        if (j <= rect.top) {
            yp += width;   // skip rows above the crop area
            continue;
        }
        if (j > rect.bottom)
            break;         // past the crop area, we are done
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0)
                y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            if (i <= rect.left || i > rect.right)
                continue;  // outside the crop area horizontally
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);

            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;

            int rx = 0xff000000 | ((r << 6) & 0xff0000)
                   | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
            rgb[ic++] = rx;
        }
    }
    return rgb;
}
With this code, my app (an English->Thai OCR dictionary) can decode the YUV420 data in reasonable time and still get the better image quality.

Monday, July 9, 2012

Writing camera app on Android is hard!

A few months ago I was experimenting with tesseract, an OCR library with an Android JNI wrapper.  My app would take pictures of English words and translate them to Thai using the dictionary database I had already implemented for another English-Thai dictionary app.

While there are quite a few Android camera API tutorials out there, none of them is actually complete.  I found that my app worked OK until it got paused (because the user switched to another app, the screen timed out, etc.); when it resumed, it would freeze.  Finally I decided to look at the source code of the official camera app.  I found that, well, it is simply impossible to write a camera app without deep insider knowledge of Android's inner workings.

For example, in a typical Android app, you would put the code to, say, re-start your app in onResume().  However, this is the comment in onResume() in the Android camera app.
// Don't grab the camera if in use by lockscreen. For example, face
// unlock may be using the camera. Camera may be already opened in
// onCreate. doOnResume should continue if mCameraDevice != null.
// Suppose camera app is in the foreground. If users turn off and turn
// on the screen very fast, camera app can still have the focus when the
// lock screen shows up. The keyguard takes input focus, so the camera
// app will lose focus when it is displayed.
See how many gotchas you have to handle just within onResume() of the camera app?  And this is only one function you need to worry about; there are gotchas all over the place.  Good thing Android is open-source, otherwise I (and most people outside the Google office) would never be able to write an acceptable camera app.  The source code for the Android camera app is quite huge, though, since it has to handle both still-picture and movie modes.

Anyway, you can get Android camera app source code here.