Wednesday, November 20, 2013

Websites which helped me a lot

Computer technology

Good programming tutorials: Java, Python, etc.
www.zetcode.com

Emulation tutorials covering the NES and Game Boy.
http://codeslinger.co.uk/pages/blog/wordpress

OpenGL and 3D game tutorials, NeHe (Neon Helium)
http://nehe.gamedev.net/
http://www.gamedev.net/

Game competition
http://www.ludumdare.com/

Indie Games
http://www.tigsource.com/

Game development resources like art and music

For geek computer users
http://www.geek.com/

Good Linux stuff, books etc.
http://www.linuxtropia.org/




Science

Very good tutorial on Waves and signals
http://www.doctronics.co.uk/signals.htm

Science projects - for science lovers
http://www.scitoys.com/




Music

MOD, XM, IT, S3M module music
http://www.modarchive.org/

Game music remixes
http://www.ocremix.org
http://remix.thasauce.net

Game music collection
http://www.gamemp3s.net

SNES music collection
http://snesmusic.org/v2/

Emulators, console music, ROMs, utilities, etc.
http://www.zophar.net




Arts

Ocean of artists
www.deviantart.com




Hardware

Hardware projects, circuits, pin-outs
http://www.epanorama.net/
http://www.pinouts.ru/




Books

http://safari5.bvdep.com/
http://www.gutenberg.org/
http://www.bartleby.com/



Research papers
http://arxiv.org/



Others

Lots of free content on almost everything; a must-visit.
http://www.freebyte.com/



Thursday, November 7, 2013

Differentiate rotation and pinch gesture





To recognize gestures manually in code, I logged how the iOS SDK does it internally. As we can see in the picture below -





















The gesture is decided by the first motion of the fingers. But in many cases a blended approach can also be used, meaning motion on the X-axis drives rotation while motion on the Y-axis drives pinch.

There is not much more to write, because other methods can use filtering and algorithms to differentiate gestures smartly, for example maintaining a rectangular band over the two touch points and measuring the motion of the fingers strictly inside its boundaries.

Note that if we don't use the first motion, there will be a delay in gesture recognition.
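As a concrete illustration of first-motion classification, here is a minimal Java sketch. It is my own construction, not the iOS SDK's internal logic: it compares how much the distance between the two fingers changed (pinch) against how far they swept around each other (rotation), using only the first motion.

```java
// Hypothetical first-motion classifier: (ax0, ay0)/(bx0, by0) are the two
// fingers' start positions, (ax1, ay1)/(bx1, by1) their positions after
// the first motion. Whichever quantity dominates decides the gesture.
class GestureGuesser {

    static String classify(double ax0, double ay0, double bx0, double by0,
                           double ax1, double ay1, double bx1, double by1) {
        double d0 = Math.hypot(bx0 - ax0, by0 - ay0); // finger distance before
        double d1 = Math.hypot(bx1 - ax1, by1 - ay1); // finger distance after
        double a0 = Math.atan2(by0 - ay0, bx0 - ax0); // finger angle before
        double a1 = Math.atan2(by1 - ay1, bx1 - ax1); // finger angle after

        double pinchAmount = Math.abs(d1 - d0);
        // convert the angle change to an arc length so the units are comparable
        double rotateAmount = Math.abs(a1 - a0) * (d0 + d1) / 2;

        return pinchAmount >= rotateAmount ? "PINCH" : "ROTATION";
    }
}
```

Two fingers moving straight apart classify as PINCH, while fingers orbiting their midpoint classify as ROTATION. A real recognizer would also handle the angle wrap-around at ±π, which this sketch ignores.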


Automatic Rectangle-Rectangle collision and response using sweep

I love side-scrolling platformers and have built my game engine, GrehGameEngine, with them in mind. The first layer of collision detection is almost always rectangle-based, because it is faster than testing other geometric shapes.

While developing games I Googled as much as I could about rectangle collisions, but I couldn't find a single complete algorithm for finding collisions and deciding the response. So, after many models and calculations, in 2012 I developed this method. It is a sweep collision method, so it should be helpful for most game developers.

Using the following illustration I will describe the method.



We have a player R1 and a platform R2. R1 has to move by (dx, dy) from START (x1, y1) to STOP (x2, y2). But the collision with R2 must stop it at EXPECTED (x3, y3).

Here we have constructed the situation; in reality we don't know where EXPECTED will be. That is what this method is for. Below is the step-wise approach to calculate (x3, y3).


STEP #1:
Get the distance between R1 and R2, and also check which projection axes already overlap. In our case, neither axis overlaps. What does this mean? Looked at from any side around these rectangles, they do not overlap; they are separate.

The distance has to be calculated between the sides which face each other. In our case R1.right and R2.left face each other.

DistX = R2.left - R1.right, DistY = R2.top - R1.bottom
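STEP #1 can be sketched in Java like this. The Rect class and method names are my own illustration, not the engine's real API; since movement can go either way, the facing sides are chosen by the sign of dx and dy.

```java
// Hypothetical axis-aligned rectangle with left/top/right/bottom, as in the post.
class Step1 {

    static class Rect {
        double left, top, right, bottom;
        Rect(double l, double t, double r, double b) {
            left = l; top = t; right = r; bottom = b;
        }
    }

    // Gap along X between the facing vertical sides of r1 and r2.
    static double distX(Rect r1, Rect r2, double dx) {
        return dx >= 0 ? r2.left - r1.right : r2.right - r1.left;
    }

    // Gap along Y between the facing horizontal sides of r1 and r2.
    static double distY(Rect r1, Rect r2, double dy) {
        return dy >= 0 ? r2.top - r1.bottom : r2.bottom - r1.top;
    }

    // Do the horizontal projections (X intervals) already overlap?
    static boolean overlapX(Rect r1, Rect r2) {
        return r1.right > r2.left && r1.left < r2.right;
    }

    // Do the vertical projections (Y intervals) already overlap?
    static boolean overlapY(Rect r1, Rect r2) {
        return r1.bottom > r2.top && r1.top < r2.bottom;
    }
}
```

In the figure's situation (R1 below-left of R2, moving right and down), both gaps are positive and neither projection overlaps, matching "no axis collides" above.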



STEP #2:
Convert the distances into fractional values by dividing them by the corresponding delta movement. In our case:

distX_percent = DistX / dx, distY_percent = DistY / dy
NOTE: handle divide-by-zero (dx = 0 or dy = 0) before this step.
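The note above, as a tiny Java helper (the name is mine): with no motion on an axis (delta = 0) that axis can never close its gap, so "infinity" is a convenient stand-in for "never reaches it".

```java
// STEP #2 as code: normalize a gap by the frame's delta movement,
// guarding the divide-by-zero case first.
class Step2 {

    static double percent(double dist, double delta) {
        if (delta == 0) {
            return Double.POSITIVE_INFINITY; // this axis never closes its gap
        }
        return dist / delta;
    }
}
```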

Why fractional / percent values? Have a look at this -



























STEP #3:
Check which projections were already overlapping. If both projections already overlap, then R1 and R2 have already collided: a STUCK situation! There is no point in moving by (dx, dy) now. Figure out how to avoid this stuck state; it happens when R1 is created overlapping R2, or after a failed collision response, which should not happen.


STEP #4:
Now the final step. Look again at figure 2, and then figure 1. Treat point A of figure 2 as the EXPECTED position in figure 1; that means db = DistX, dp = DistY, and dh = the EXPECTED delta move.

Imagine R1 slowly moving towards R2 by (dx, dy) and colliding with R2 at EXPECTED. That is the visual picture; how do we get it with maths? The answer is already in our hands: we have to test which projection satisfies the collision on both axes.


We take the DistX_percent value first and apply the relation db% = dp% = dh%.

new_DistX = DistX_percent * dx (not actually needed, because new_DistX = DistX already)
new_DistY = DistX_percent * dy (db% = dp%, taking db% of dy)

Translate R1 by these new values and check whether it collides with R2. If it collides, then EXPECTED = (new_DistX, new_DistY). If it doesn't collide, try DistY_percent instead:

new_DistY = DistY_percent * dy (not actually needed, because new_DistY = DistY already)
new_DistX = DistY_percent * dx (dp% = db%, taking dp% of dx)

Again translate R1 by these new X, Y values; if it collides with R2 then EXPECTED = (new_DistX, new_DistY). If it doesn't collide either, there is no collision in this delta move.
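Putting steps #1 to #4 together, the whole sweep can be sketched as one Java method. All the names here are my own; the author's HTML5 demo is the reference implementation. Collision at the contact time is tested with inclusive comparisons so that touching edges count.

```java
// A self-contained sketch of the sweep (steps #1-#4); assumes axis-aligned
// rectangles and returns the allowed (moveX, moveY), clipped at EXPECTED
// when a collision is found.
class RectSweep {

    static class Rect {
        double left, top, right, bottom;
        Rect(double l, double t, double r, double b) {
            left = l; top = t; right = r; bottom = b;
        }
    }

    static double[] sweep(Rect r1, Rect r2, double dx, double dy) {
        // STEP #1: signed gaps between the facing sides
        double distX = dx >= 0 ? r2.left - r1.right : r2.right - r1.left;
        double distY = dy >= 0 ? r2.top - r1.bottom : r2.bottom - r1.top;

        // STEP #2: normalize by the delta move, guarding divide-by-zero
        double px = dx != 0 ? distX / dx : Double.POSITIVE_INFINITY;
        double py = dy != 0 ? distY / dy : Double.POSITIVE_INFINITY;

        // STEP #4: close the X gap first, then check whether the Y
        // projections actually meet at that moment (touching counts)
        if (px >= 0 && px <= 1) {
            double top = r1.top + px * dy, bottom = r1.bottom + px * dy;
            if (bottom >= r2.top && top <= r2.bottom) {
                return new double[] { px * dx, px * dy }; // EXPECTED
            }
        }
        // otherwise close the Y gap and check the X projections
        if (py >= 0 && py <= 1) {
            double left = r1.left + py * dx, right = r1.right + py * dx;
            if (right >= r2.left && left <= r2.right) {
                return new double[] { py * dx, py * dy }; // EXPECTED
            }
        }
        return new double[] { dx, dy }; // no collision: the full move is safe
    }
}
```

STEP #3's STUCK case is when both projections already overlap before moving; this sketch does not handle it, matching the post's advice to prevent it from ever arising.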



Here is the method in action. It is an HTML5 version which can easily be ported to C++, Java, etc. The whole code can be found inside the JavaScript section of this HTML file.

Download Or Open rect-collision.html


In the above demo, the green rectangle is R1 (START), the blue rectangle is the platform R2, brown is STOP, and red is the resulting EXPECTED rectangle. Left-click/drag to move the STOP position and right-click to set the START position.



Cleaner illustration:














Figure 3


In the above figure we can see the distance between R1 (green) and R2 (blue). How do we calculate the EXPECTED position, i.e. DistH?

We take DistX as our dx delta move and find dy using the db% = dp% relation.
new_DX = DistX% * DX
new_DY = DistX% * DY

Now translate R1 to this new position and check for collision. If it collides, this is our DistH. Otherwise we take DistY as the dy delta move and find dx using DistY%:
new_DX = DistY% * DX
new_DY = DistY% * DY

Then do the same collision check again. If it collides, we get DistH => (new_DX, new_DY).
If it doesn't collide either, the (DX, DY) delta move is considered a safe move without collision.



Tuesday, October 29, 2013

GIMP for digitalization of sketches

GIMP is very powerful graphics software. I use it for 95% of my graphics work. In this article I will show how to digitize a sketch. I discovered this method when I wanted the effect of a digital pen, so I decided to use raw sketches, since they are a natural and powerful way of capturing what we have in mind. I always prefer working the natural way, staying away from the screen whenever possible. I am also in favor of saving power, so I try to use methods which do not depend on electricity.

Things and skills needed:
  • My setup: ArtistX Linux (modified Ubuntu) on an Acer eMachines EMD-644 AMD
    APU-powered laptop. It consumes less than 25 watts.
  • GIMP 2.8 or above if available. 2.6 will also work.
  • Basics of GIMP.

Below are the methods in steps.

  • Take a picture of your sketch with a good camera, as clean as possible and with a high contrast ratio (dark and bright areas should differ strongly).
  • To get good contrast use plain white paper and black ink.
  • To get a regular, uniform surface, try putting a glass panel over the paper. Make sure reflections do not disturb the shot. Or simply use a scanner if you have one.


Now the digitization process. To show that this process succeeds even in bad cases, I am using a low-quality sketch taken under resource constraints. The image is a bit blurred and drawn with a thick pen. Still, we will achieve our goal.
















Start GIMP and load your sketch. Make a copy of the original layer, then set the original invisible and locked. Work on the copy.

Method #1:
  1. Click: Colors => Threshold => Move the slider to choose the best output you need.
  2. Try to remove noise and unwanted colors using other options like Brightness-Contrast, Levels, Curves, and Desaturate. Then try step 1 again.

Method #2:
  • Click Filters => Artistic => Photocopy.
  • Play with mask radius to get how much sketch ink you want in output.
  • Keep sharpness higher, though this is not a rule for every case.
  • Keep % black and % white near 100%, i.e. 1.0.


Extra Methods for enhancement of output. Apply before using Methods 1 & 2:
  • Colors => Auto => White Balance, Normalize, Stretch Contrast, Stretch HSV.
  • Color => Color to Alpha => Select the color of paper. Find color of paper using color picker.
  • Filters => Enhancement => Unsharp Mask. Play with radius and little bit of other options.
  • For some cases: Filters => Blur => Pixelize and unsharp mask (above) can be used in respective order to get nice results.


We should rely on the extra methods only to enhance the sketch so that Methods 1 & 2 give their best results. Below is the output of the above techniques.

















I hope this helps in your sketch work. If it did, please thank GIMP first :). It is open source and needs lots of support and funds; the least we can do is help it grow!




Download this tutorial in PDF format for offline use and printing: GIMP_for_digitalization_of_sketches.pdf.7z



Search TAGS:
How to convert sketch into digital image, convert paper into digital black and white format, sketch digitization, digitalization.


Wednesday, September 4, 2013

What I have learnt about the universe



Life is just a visual form among the compatible visual forms, both created automatically by sound... Life is entropy.

This is what Sanatan Dharm knew lakhs of years ago... This is why sound, mantra, and yantra (the visualization of mantra) are so important in Sanatan Dharm (Hinduism, Buddhism).

Chanting AUM connects us with the universe and keeps us stable. I read more about this in the Mundaka Upanishad; please refer to it.


Please watch the video below: experiments done by Hans Jenny which visualize sounds creating shapes. This is how our world came into existence, called "MAYA", meaning illusion, as already described in the Upanishads many centuries ago. However, modern science keeps talking about particles. So keep finding particles. Good luck!


According to Sanatan Dharm (misnamed and badly edited into what is now called Hinduism) there are two types of knowledge:

1. Higher Knowledge, ( Brahm Gyan )
2. Lower Knowledge, ( Vedic knowledge of Music, dance, Arts, science etc )

The biggest difference between the two is that Brahm Gyan can only be gained by meditation; it can be known from books but cannot be felt or illustrated that way, just as I can tell you I ate a fruit and how it tasted without you ever tasting it. And most importantly, without our inner connection with the higher knowledge we cannot use it, which means Brahm Gyan is nearly useless to a common human. Once a person knows Brahm Gyan, he or she comes out of maya. As a result, they first lose their interest in worldly things; they start to feel content and satisfied, and live simply.

The lower knowledge is infinite and gets created all the time. Like atoms making compounds, and compounds being liquid, gas, or solid... and so on.

One who learns how to control waves, by himself or with a highly advanced device, can play with MAYA. Our ancestors had this ability; they used to recite mantras and control the MAYA forms around them. It is like playing music on a sitar or guitar.

That is why the words WAVES and SOUND are used so heavily in Sanatan Dharm. For example, Asura (devil, evil, etc.) means "A + Sur" = noise, or the opposite of sur / harmonics.
Bad sounds and noises are negative energies of the universe, meaning chaos. The same goes for its manifestations: bad odour, lies, abuse, cheating, and other negative things.

Since nothing is actually created, how can it be destroyed? Only forms change, and change is inevitable. Why do people not think about waves being everywhere? Is there anything void of waves? Nowhere.


What are the basic attributes of the Universe?
Many people say it is all science: if we cannot sense something, we cannot even dream about it. First of all, the senses are worldly things. The base attribute is feeling. Feeling is life.

Feeling's Maya'tic manifestations: visual, aural, temperature, touch.


Suppose light, sound, and temperature (better said, temperature differences) were removed from the world. What would happen?

A: We would not exist, even if we did.


For now I do not have time to write much more here; maybe I should write a whole book on the subject. Before I end, below are some important things we should think about.

The way 1 and 0 create the whole of mathematics, nature works the same way: nodes and anti-nodes created by waves. Who gives energy to this wave... no one knows; at least I don't.

One who chants AUM connects with Brahm. The Mundaka Upanishad says chanting AUM gives us energy, and this created energy helps our soul handle the infinite loop of life.

Below is an analogy:
Suppose you died. For this world you do not exist, but YOU come out of a dream somewhere else, with the same people, and say "I had a bad nightmare". You love that place and forget your dream. Well, this is just an example to let you feel how powerful MAYA is!

According to the Mundaka Upanishad I read, after death the feeling of "ME / I" still exists; we don't own a body, only feelings. According to our actions we see a blank (of course dark) universe. No directions, no sound, no light, no gravity, no temperature. There is only emptiness and loneliness.

You can only feel extreme pain. Those who chanted AUM and did good karma get light early to exit this state; those with bad karma suffer.






Saturday, July 6, 2013

Power of Two image resizer

This Java app automatically resizes images to the closest power of two, making them suitable for OpenGL/GLES. Useful for mobile developers using OpenGL ES.


Usage: java -jar ClosestPOTResizer.jar in.png OR in.jpg


It is just a 7 KB app and runs on Java 1.2 or above. The download package contains the app's source code, and the executable JAR is inside the distributable folder.

Download: ClosestPOTResizer.7z


Source code:

/* ##################################################### */

package closestpotresizer;

import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;
import java.awt.image.IndexColorModel;
import java.io.File;
import java.io.FileInputStream;
import java.util.Iterator;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageTypeSpecifier;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.plugins.jpeg.JPEGImageWriteParam;
import javax.imageio.stream.ImageOutputStream;

/**
 *
 * @author Bindesh Kumar Singh
 * @contact bindeshkumarsingh@gmail.com
 * @website http://www.ourinnovativemind.in
 */

public class ClosestPOTResizer {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
//        System.out.println("" + POTResizer.getClosestPOT(500, false));
//        System.out.println("" + POTResizer.getClosestPOT(500, true));
        try {
            if (args.length < 1) {
                System.out.println("Usage: java -jar ClosestPOTResizer.jar \"in.png OR in.jpg\"");
                return;
            }
            POTResizer r = new POTResizer();
            r.resize(args[0]);
        } catch (Exception ex) {
            Logger.getLogger(ClosestPOTResizer.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

}

class POTResizer {
   
    public void resize(final String infile) {
        try {
            BufferedImage img = ImageIO.read(new FileInputStream(infile));
           
            // get extension
            int len = infile.length();
            String extension = infile.substring( infile.lastIndexOf('.') + 1, len );
            String outfile = "POT_" + infile;
           
            // get current size
            int w = img.getWidth();
            int h = img.getHeight();

            // closest POT
            int potW = getClosestPOT(w, true);
            int potH = getClosestPOT(h, true);

            // resize to POT
            img = getScaled(img, potW, potH);

            // save resized image
            storeImage(img, new File(outfile), extension, 0.9f);

        } catch (Exception exp) {
            Logger.getLogger(POTResizer.class.getName()).log(Level.SEVERE, null, exp);
        }
    }

    public boolean storeImage(BufferedImage bi, File outputFile, String extension, float quality) {
        // e.g. storeImage( image, new File( "file.png" ), BufferedImageUtil.IMAGETYPE_PNG, 0.8f);
        try {
            //reconstruct folder structure for image file output
            if (outputFile.getParentFile() != null && !outputFile.getParentFile().exists()) {
                outputFile.getParentFile().mkdirs();
            }
            if (outputFile.exists()) {
                outputFile.delete();
            }
            //get image file suffix
            //get registry ImageWriter for specified image suffix
            // BufferedImageUtil.IMAGETYPE_PNG, 0.8f)
            Iterator writers = ImageIO.getImageWritersBySuffix(extension);
            ImageWriter imageWriter = (ImageWriter) writers.next();
            //set image output params
            //get the writer's own default params so non-JPEG formats
            //(e.g. PNG) do not reject JPEG-specific settings
            ImageWriteParam params = imageWriter.getDefaultWriteParam();
            if (params.canWriteCompressed()) {
                params.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
                params.setCompressionQuality(quality);
            }
            //write image to file
            ImageOutputStream imageOutputStream = ImageIO
                    .createImageOutputStream(outputFile);
            imageWriter.setOutput(imageOutputStream);
            imageWriter.write(null, new IIOImage(bi, null, null), params);
            imageOutputStream.close();
            imageWriter.dispose();
            return true;
        } catch (Exception e) {
            e.printStackTrace();
        }
        return false;
    }

    static public int getClosestPOT(int number, boolean higher) {
        int ret = 2;
        while (ret < number) {
            ret *= 2;
            if (!higher && ret > number) {
                ret /= 2;
                return ret;
            }
        }
        return ret;
    }

    static public BufferedImage getScaled(BufferedImage img,
            int targetWidth,
            int targetHeight) {

        BufferedImage scaledImage = new BufferedImage(
                targetWidth, targetHeight, img.getType());// BufferedImage.TYPE_INT_ARGB);
        Graphics2D g2d = scaledImage.createGraphics();
        g2d.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g2d.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                RenderingHints.VALUE_ANTIALIAS_ON);
        // scale to the full target size (width-1/height-1 would leave a 1px gap)
        g2d.drawImage(img, 0, 0, targetWidth, targetHeight, null);
        g2d.dispose();
        return scaledImage;
    }
   
   
}

/* ################################################### */



TAGS:
Power of two image converter resizer POT

Monday, July 1, 2013

Make sprite sheet from frame files with auto-crop and merge.

Another tool for my GrehGameEngine Tools Collection.

It takes all user-provided images, auto-crops them, and merges them to build a sprite sheet. It also exports a frame-information file describing each frame as a percentage of the sheet size.

USAGE:
java -jar SpriteSheetFromPNGs.jar "1st.png" "2nd.png" "3rd.png" "#.png" ... more files; file names are not fixed, you can use any PNG files.

OUTPUT:
"sheet.png" with all images cropped and merged, and "sheet.png.conf" with each frame's position and size relative to "sheet.png", i.e. the x, y, w, h of each frame as a percentage of the sheet's size.


Download:
SpriteSheetFromPNGs.7z


Requirements:
It is a Java app and needs Java 5 or above.


Tags: sprite, sprite sheet,  auto crop and merge

Monday, June 17, 2013

Making vector like smooth gradients in GIMP

I always wanted to know how graphics like Angry Birds and Cut the Rope are made; then I came to know about Inkscape and vector art. But I want to master a few programs instead of learning hundreds, so I wanted to achieve the same in GIMP.

Except for shape editing I find GIMP robust enough for most of my needs, so I gave up using Inkscape for everything except making shapes. Now GIMP does all my work :).

I have written a simple, short tutorial on making the smooth gradients often found in vector graphics. It gives some simple but powerful tips on choosing colors and opacity to build smooth color gradients.


Below is the download link of the tutorial in PDF format for offline use:
GIMP_smooth_vector_like_gradients.pdf.7z

Or online version below:


Requirements:
  1. GIMP 2.8 is recommended, otherwise 2.6 if 2.8 is not available.
  2. Some basic knowledge of PCs and graphics terms like alpha, hue, saturation, etc.
  3. Basic usage of GIMP or other app with layers.

The only tools and settings we will use most:
  1. Airbrush tool, and the brush's opacity & size.
  2. HSV colors instead of RGB.


Steps:
  1. Create a new project, 640x480 or higher resolution, with a transparent background.
  2. Fill the background with a dark green color. [ HSV = 124, 99, 63 ]
  3. Select the Airbrush tool with size = 20, opacity = 100, and the same green foreground color as above.
  4. Select the blurred circle brush, size = 51x51, hardness 075.
  5. Create a new layer and click it to make it the active work layer.


Things to note:
We picked the same green color, which will not produce any visible effect on the background. So why pick it?

Because we want to use HSV, which gives us a great color match and finer differences between surrounding colors through Saturation and Value (lightness).

Notations:
H = Hue, S = Saturation, V = Value (lightness), O = Opacity, Sz = Size

Back to steps:
  1. Now click the FG color box to change the color of our brush.

  2. Look at the color formats available: H, S, V, R, G, B. We will always use HSV, not RGB, because altering RGB values changes colors in undesired ways.
  3. Now we can move the V slider to change the darkness/lightness of a color and the S slider to change the color strength. H values are for choosing different colors.
  4. We will select V = 40.
  5. Now draw an irregular line (L1) in the blank layer. Then set V = 30.
  6. Use the “Fuzzy select tool” and select that irregular line. Click “Menu -> Select -> To Path”.
  7. Go to the Paths tab near the Layers tab (dockable dialogs) and enable the path with the eye button at the left. Switch back to the Layers tab and select the Path tool.
  8. Select the path and move it above and to the right of the last irregular line. Use the Move (Alt) option in the Path tool to move the path. Create a selection from the path, then hide the path again.
  9. Now fill this selection with the airbrush: V = 40, O = 50, Sz = 30. The mouse pointer should stay over the top-right border of the selection so that the middle of the selection gets the lighter green color. Unselect all to remove the selection. Now the lines should look like a walking path (L2).













  1. Now change the brush: V = 75, Sz = 50, O = 100. Draw below L1. This will give output like this -




 
  1. Using this method we can make smooth color gradients just like those in vector-art apps such as Inkscape, but GIMP gives us much more powerful control.
  2. Below is the final output of what I made while writing this tutorial, using some objects I made separately. Viz -




















 
The water-like effect at the bottom of the image was made using only the layer blend MODE, not any pen or brush.

You can try it: make a new layer above your green layer, draw figures or anything, and from the layer Mode select Saturation, Value, Dodge, etc., whichever looks good to you.

All these graphics were made using the method I described above. You have to play with V, S, O, Sz, and brushes. Use opacity to get a lighter color instead of changing V every time.

There are lots of techniques to play with, but one good trick for quick graphics is to make a bigger image with the airbrush tool, downscale it, and then do pixel editing if desired.


Let's make simple grass:
  1. Create a new project 640x480 with transparent background.
  2. Now create a grass-like shape with a closed boundary so that we can fill it with green.
  3. Now select the grass shape with the Fuzzy Select tool and click Menu => Select => Save to Channel (keep important selections saved in channels). Get back to the Layers tab and create a new layer. Keep the selection active!

NOTE:
Always keep outlines in a separate layer so they stay unaltered; pick selections from them and work on new layers. That is, select something in a layer using the selection tools, then create new layers and do the editing there. This is important so that nothing we paint spreads outside the boundaries (the outline) of our target object, and our outlines remain available as they are.

  1. Now fill this new layer with green, with your choice of S and V values.
  2. Now select the Airbrush tool, choose a darker value, and apply shade on the left of the grass.
  3. It is important to shade in a gradient, i.e. left to right => dark to light. To achieve this, do not change V; just reduce the opacity to draw less darkly towards the right.
  4. The above shading implies the grass has a light source on its right side. Without shading, the grass would be a plain, single-color fill. Can you imagine what shade objects have at night, in the dark? Of course nothing but black. That is why you should shade thoughtfully, accounting for all light sources, including the color mixing caused by nearby objects. Suppose there is a blue orb at the left; then apply a little blue reflective shading at the left to produce more realistic output.

NOTE:
Lots of paintings fail due to bad shading; we should never apply anything without planning. While working, always ask yourself: why am I doing this? Even a single pixel can produce a bad shade if added randomly.


  1. Now select a higher V, meaning a lighter green, and apply it to the right side of the grass, just as done above. Below are snapshots of all the steps done so far -





















  1. Now select white. But note: in HSV, white means V = 100, S = 0. Apply some white shading randomly where we assume direct light reflection. Adjust the opacity to keep the white level as required.
  2. Remove the black outline and your grass is ready.

This result may or may not be up to your expectations, but the shading methods above will get you to the result you want, depending on the effort you put in.























Impact of nearby objects:
Colors mix together; here Blue + Green = Cyan. So, to give the shading a realistic color, create a new layer and apply the shading in the color of the nearby object (blue here). The layout: the blue shade layer above, the target object below (the grass here). Click the blue shade layer and set its MODE to “Addition”. This adds blue to the bottom layer, i.e. green, automatically producing cyan.








TIP: Always reuse your work if possible, using color and size variations of your images.
- GIMP rocks :) -




GIMP TIP:
To master GIMP one has to become skilled with selections, selection modification, paths, and color selection. The tutorial above gives a little demo of all this. There is much more, depending on your needs. But instead of depending too much on filters, try to make things yourself, otherwise you may lose the art of mastering it!

Sunday, June 16, 2013

Using GIMP for learning and working with Pixel-Art

I have been using GIMP for pixel art and it works great, with almost everything I needed for pixel art. Even more than that!

I have written a tutorial on how GIMP can be used for pixel art, with some techniques to enhance pixel-art work.

Please download the tutorial in PDF format for offline use:

GIMP_for_pixel_art.pdf.7z


Or online version below:




Setup GIMP for pixelart work
  1. Start GIMP ( v2.8 recommended ).
  2. Create a new 32x32 project with a transparent background.
  3. Zoom in until the working canvas fits the screen.
  4. Select the Pencil tool with black foreground color (the default).
  5. Choose any brush which doesn't have blur; simply select the plain black circle.
  6. Now set the brush size to 1 pixel in GIMP 2.8, OR minimum scale in GIMP 2.6.


The above steps have now set up a pixel-art environment in GIMP.

Setup color palettes
  1. Go to any of your favorite tabs (the Layers tab at right is recommended) and click the [<|] button which shows the tooltip “Configure this tab”. See the following image:









  1. Click -> Add Tab -> Palettes. Add the FG/BG Color tab the same way.


TIPS:
  1. Always work in greyscale using the Greys (32) palette. Click this palette, then switch to the FG/BG tab pane. You will get the black-to-white colors in order.
  2. Why greyscale only? Because realism doesn't reside in the colors themselves but in the number of shades. Doesn't a black & white image look real? It does. Therefore work in greyscale without caring about colors; just focus on shading and dithering, then after completion add colors using fills or the “Select by Color” tool.
  3. Instead of selecting a darker or lighter color from the color chooser, just change the opacity of the selected color. E.g. use black over lighter backgrounds with varying opacity to get the desired black/grey.
  4. Another fast way to get colors similar to an image is to pick colors from the image itself using the “Color Picker” tool.
  5. To get dispersed, spray-like pixels, check “Apply Jitter” with the desired value.


How to get pixelart samples?
This can be done easily using GIMP. The following steps will get you loads of pixel-art samples from the images/wallpapers you already have.

  1. Import/Open any camera picture in GIMP.
  2. Resize it to a size where the image becomes pixelated but still clearly recognisable.
  3. Now click in the menu: Image => Mode => Indexed => Generate optimum palette, with maximum 256 colors.
  4. This reduces the total number of colors used by the image. It can make the image look low-colored, but that is exactly what we want, because most pixel art is done with only 256 colors.
  5. Now look at the parts of the image and see how the pixels are organised :)











NOTE:
An indexed image does not offer flexible alpha values; pixels are only visible or invisible. To edit an indexed image, convert it back to RGB format.


A few pixel arts I made in GIMP:










Tuesday, April 2, 2013

HQ2X rescaler, java app

The HQnX (hq2x, hq3x, hq4x) algorithms can enhance the quality of a pixelated image. Nice software for pixel artists.


Usage:


java -jar ImageScaler_HQ2X.jar "image_file"


Download link: contains the source code (NetBeans project) and the binary in the dist folder.
ImageScaler_HQ2X.7z



NOTE: The HQ2X code is not mine; it belongs to Maxim Stepin:

http://www.hiend3d.com/hq2x.html

I took the hq2x Java code from the internet and used it to build this app.

Sprite Frame Designer - animate regions of an Atlas image

I created this app for working with areas/regions of an atlas image to create sprites. It lets you visualize a sprite sheet by creating frames from the desired areas of the image, and it can export the frames as percentages of the image's size and position, which means resizing the image doesn't invalidate the frame information. However, editing the positions of the framed areas can corrupt it.

The package contains a default sprite sheet I created using my SpriteEditor app.

Tutorial:
1. Open the app by executing the "bat" file on Windows, or the SH (shell script) file on Linux.
2. Now open your desired PNG image with sprites.
3. Select frame 0 and choose your frame area using the rectangular blinking selection.
4. Create frame 2 and do the same.
5. Adjust the animation time by setting a value in the Delta (ms) field.

The rectangular marker has two blinking circles: the left circle moves the frame, the right circle resizes it.

You can export the frame information to a text file. This file is specific to the image you opened.

Try the built-in animation by running the app and then opening the walk.conf file from the res directory.


Download link:
SpriteFrameDesigner.7z

Software requirements:
1. Linux / BSDs / Windows
2. JRE - Java Runtime Environment. JDK 5 was used for development.


Example output of frame information, from walk.conf in the res directory:
/* start */ 

kull_frames = 4; /* Total frames in walk animation */
Frame_1 = [array] 0.0 0.0 26.171875 93.75 0.0 0.0 100;
Frame_# ...

Frame_4 = [array] 71.09375 0.0 28.125 97.65625 -2.734375 0.0 100;

/* Finish */


Frame_# = [array] X% Y% Width% Height% HotSpotX% HotSpotY% Delta_Milliseconds;

X% = X as a percentage of image width. Same for Y%.

Width% = frame width as % of image width.
Height% = frame height as % of image height.
HotSpotX,Y = game-engine-dependent values; ignorable.
Delta = draw time of this frame in milliseconds.

Calculated frame pixel positions, by example:
Let image size = 200x100
and Frame_1 be X = 50%, Y = 20%, W = 100%, H = 50%.
Then X = 100, Y = 20, W = 200, H = 50.
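The conversion above can be sketched as a one-line Java helper (the class and method names are mine, not the tool's actual code):

```java
// Convert a percentage of an image dimension into pixels.
class FrameMath {

    static int toPixels(double percent, int imageDimension) {
        return (int) (percent / 100.0 * imageDimension);
    }

    public static void main(String[] args) {
        int imgW = 200, imgH = 100;               // image size = 200x100
        System.out.println(toPixels(50, imgW));   // X:  50% of 200 -> 100
        System.out.println(toPixels(20, imgH));   // Y:  20% of 100 -> 20
        System.out.println(toPixels(100, imgW));  // W: 100% of 200 -> 200
        System.out.println(toPixels(50, imgH));   // H:  50% of 100 -> 50
    }
}
```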

Search tags:
Sprite sheet visualizer,  sprite sheet area exporter,