The code as it stands is not very useful: after you zoom into an object, you can't pan around to examine it. So we now need to implement panning. This part gets a little tricky, because we now have to keep track of two kinds of gestures: a drag and a zoom. To do that, we have to start looking at the different events that can be generated when the user touches the screen:
@Override
public boolean onTouchEvent(MotionEvent event) {
    //This is the basic skeleton for our code. We examine each of the possible motion events that can happen.
    switch (event.getAction() & MotionEvent.ACTION_MASK) {
        case MotionEvent.ACTION_DOWN:
            //This event happens when the first finger is pressed onto the screen.
            /*
             * ... code to handle this event ...
             */
            break;
        case MotionEvent.ACTION_MOVE:
            //This event fires when the finger moves across the screen, although in practice I've noticed
            //that this fires even when you're simply holding the finger on the screen.
            /*
             * ... code to handle this event ...
             */
            break;
        case MotionEvent.ACTION_POINTER_DOWN:
            //This event fires when a second finger is pressed onto the screen.
            /*
             * ... code to handle this event ...
             */
            break;
        case MotionEvent.ACTION_UP:
            //This event fires when all fingers are off the screen.
            break;
        case MotionEvent.ACTION_POINTER_UP:
            //This event fires when the second finger is off the screen, but the first finger is still
            //on the screen.
            /*
             * ... code to handle this event ...
             */
            break;
    }
    detector.onTouchEvent(event);
    /*
     * ... code ...
     */
    return true;
}
Using these events, we can now decide whether we need to pan or zoom. So let's use a variable called mode
to keep track of which state we're in:
public class ZoomView extends View {

    private static final int NONE = 0;
    private static final int DRAG = 1;
    private static final int ZOOM = 2;

    private int mode;
    ...
    ...

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction() & MotionEvent.ACTION_MASK) {
            case MotionEvent.ACTION_DOWN:
                //The first finger has been pressed. The only action the user can take now is to pan/drag,
                //so let's set the mode to DRAG.
                mode = DRAG;
                ...
                break;
            case MotionEvent.ACTION_MOVE:
                //We don't need to set the mode at this point because the mode is already set to DRAG.
                ...
                break;
            case MotionEvent.ACTION_POINTER_DOWN:
                //A second finger has been placed on the screen, so we need to set the mode to ZOOM.
                mode = ZOOM;
                break;
            case MotionEvent.ACTION_UP:
                //All fingers are off the screen, so we're neither dragging nor zooming.
                mode = NONE;
                ...
                break;
            case MotionEvent.ACTION_POINTER_UP:
                //The second finger is off the screen, so we're back to dragging.
                mode = DRAG;
                ...
                break;
        }
        detector.onTouchEvent(event);
        /*
         * ... code ...
         */
        return true;
    }
}
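To make the example concrete, here is one possible sketch of how the mode flag could drive an actual pan, combined with the scale applied in onDraw(). The field names (startX, translateX, previousTranslateX and so on) and the ScaleListener are illustrative assumptions rather than a fixed recipe; wire them up to match how your own detector and scale factor are set up.

import android.content.Context;
import android.graphics.Canvas;
import android.util.AttributeSet;
import android.view.MotionEvent;
import android.view.ScaleGestureDetector;
import android.view.View;

public class ZoomView extends View {

    private static final int NONE = 0;
    private static final int DRAG = 1;
    private static final int ZOOM = 2;

    private int mode = NONE;

    //Illustrative fields: the current scale and the pan applied to the canvas.
    private float scaleFactor = 1.0f;
    private float startX, startY;                         //where the current drag started
    private float translateX, translateY;                 //pan currently applied, in screen pixels
    private float previousTranslateX, previousTranslateY; //pan left over from previous drags

    private final ScaleGestureDetector detector;

    public ZoomView(Context context, AttributeSet attrs) {
        super(context, attrs);
        detector = new ScaleGestureDetector(context, new ScaleListener());
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction() & MotionEvent.ACTION_MASK) {
            case MotionEvent.ACTION_DOWN:
                mode = DRAG;
                //Remember where the drag started relative to the pan we already have,
                //so the content doesn't jump on the next ACTION_MOVE.
                startX = event.getX() - previousTranslateX;
                startY = event.getY() - previousTranslateY;
                break;
            case MotionEvent.ACTION_MOVE:
                translateX = event.getX() - startX;
                translateY = event.getY() - startY;
                break;
            case MotionEvent.ACTION_POINTER_DOWN:
                mode = ZOOM;
                break;
            case MotionEvent.ACTION_UP:
                mode = NONE;
                previousTranslateX = translateX;
                previousTranslateY = translateY;
                break;
            case MotionEvent.ACTION_POINTER_UP:
                mode = DRAG;
                previousTranslateX = translateX;
                previousTranslateY = translateY;
                break;
        }

        detector.onTouchEvent(event);

        //Redraw while the user is dragging or zooming.
        if (mode == DRAG || mode == ZOOM) {
            invalidate();
        }
        return true;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        canvas.save();
        canvas.scale(scaleFactor, scaleFactor);
        //Dividing by scaleFactor keeps the pan in screen pixels even though the canvas is scaled.
        canvas.translate(translateX / scaleFactor, translateY / scaleFactor);
        // ... draw your content here ...
        canvas.restore();
    }

    private class ScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
        @Override
        public boolean onScale(ScaleGestureDetector d) {
            scaleFactor *= d.getScaleFactor();
            //Clamp the scale so the content can't shrink to nothing or grow without bound.
            scaleFactor = Math.max(0.1f, Math.min(scaleFactor, 5.0f));
            return true;
        }
    }
}

Note that invalidate() is what triggers onDraw() again, so the pan and zoom only become visible once the view is redrawn.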
I appreciate how clearly you explained everything. Thank you!
I want to build a “zoomable paint”, I mean a paint app where I can zoom in/out and pan/drag the canvas and then draw on it.
I have a problem that I can’t solve: when I draw while the canvas is zoomed, I retrieve the X and Y coordinates and draw at that point on the canvas. But these coordinates are not correct because the canvas is zoomed.
I tried to correct them (multiplying by (zoomHeight/screenHeight)) but I can’t find a way to work out where I must draw on the original, un-zoomed canvas.
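For what it's worth, if the pan and zoom are applied with canvas.scale() followed by canvas.translate() as in the sketch above, a touch point can be mapped back to the un-zoomed canvas by inverting that transform. The helper below is a hypothetical sketch that reuses the illustrative translateX/translateY/scaleFactor fields from earlier; the name toCanvasCoords is not part of any API.

//Hypothetical helper: maps a touch position back to the coordinate space the content was
//drawn in, assuming onDraw() applies canvas.scale(scaleFactor, scaleFactor) and then
//canvas.translate(translateX / scaleFactor, translateY / scaleFactor) as in the sketch above.
private float[] toCanvasCoords(float touchX, float touchY) {
    float canvasX = (touchX - translateX) / scaleFactor;
    float canvasY = (touchY - translateY) / scaleFactor;
    return new float[] { canvasX, canvasY };
}

Drawing at those mapped coordinates, rather than at the raw touch position, keeps strokes aligned with the zoomed view.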
Whenever I try to implement the method above, the screen jitters because of the zoom effect but doesn’t actually zoom. I am drawing circle points on a canvas, and on zooming I want them to scale further apart. I don’t know whether I should custom-draw those circle points based on the dimensions of the zoomed canvas; why is the overall effect a mere jitter?