Animated zoom to location/map placemark when map view is opened (iOS 6)

I've seen this in other applications, for example the iOS 6 Starbucks app: when my map view is opened, I want it to show the whole of the UK/British Isles as the region, then zoom in to the region around my specified location points.
Updated Code:
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    [mapView setMapType:MKMapTypeStandard];
    [mapView setZoomEnabled:YES];
    [mapView setScrollEnabled:YES];

    // start with a region covering the whole UK
    MKCoordinateRegion region = { {0.0, 0.0}, {0.0, 0.0} };
    region.center.latitude = 54.5;
    region.center.longitude = -3.5;
    region.span.longitudeDelta = 10.0f;
    region.span.latitudeDelta = 10.0f;
    [mapView setRegion:region animated:NO];

    [self performSelector:@selector(zoomInToMyLocation)
               withObject:nil
               afterDelay:2]; // will zoom in after 2 seconds
}
-(void)zoomInToMyLocation
{
    MKCoordinateRegion region = { {0.0, 0.0}, {0.0, 0.0} };
    region.center.latitude = 51.502729;
    region.center.longitude = -0.071948;
    region.span.longitudeDelta = 0.19f;
    region.span.latitudeDelta = 0.19f;
    [mapView setRegion:region animated:YES];
    [mapView setDelegate:self];

    [self performSelector:@selector(selectAnnotation)
               withObject:nil
               afterDelay:0.5]; // will select the annotation after 0.5 seconds
}
-(void)selectAnnotation
{
    DisplayMap *ann = [[DisplayMap alloc] init];
    ann.title = @"Design Museum";
    ann.subtitle = @"Camberwell, London";
    ann.coordinate = region.center;
    [mapView addAnnotation:ann];
}
I don't know if it's correct, because the error is on this line:
ann.coordinate = region.center;

If you want to start with showing one region and then zoom in, you have to issue two or more setRegion calls because setRegion by itself doesn't let you control the starting region or the speed of the animation.
In viewDidLoad, set the initial region's span so the entire UK is visible (try deltas of 10.0 instead of 0.15). You could also set animated to NO for the initial region.
Then before the end of viewDidLoad, schedule the zoom-in to be executed a few seconds later:
- (void)viewDidLoad
{
    ...
    [self performSelector:@selector(zoomInToMyLocation)
               withObject:nil
               afterDelay:5]; // will zoom in after 5 seconds
}
The zoomInToMyLocation method might look like this:
-(void)zoomInToMyLocation
{
    MKCoordinateRegion region = { {0.0, 0.0}, {0.0, 0.0} };
    region.center.latitude = 51.502729;
    region.center.longitude = -0.071948;
    region.span.longitudeDelta = 0.15f;
    region.span.latitudeDelta = 0.15f;
    [mapView setRegion:region animated:YES];
}
One thing you might have to take care of when using performSelector is to cancel a pending call if the view is closed or deallocated before the call is scheduled to run. For example, if the user closes the view two seconds after loading it, the scheduled method may still get called three seconds later and will crash since the view is gone. To avoid this, cancel any pending performs in viewWillDisappear: or wherever appropriate:
[NSObject cancelPreviousPerformRequestsWithTarget:self];

MKCoordinateRegion region = { {0.0, 0.0}, {0.0, 0.0} };
region.center.latitude = yourLatitude;   // your latitude
region.center.longitude = yourLongitude; // your longitude
region.span.longitudeDelta = 0.01f;
region.span.latitudeDelta = 0.01f;
[map setRegion:region animated:YES];
[map addAnnotation:ann];

Related

How to update a uniform vec2 when the mouse is moving

I'm trying to update a uniform vec2 when the mouse moves, but I get an error: TypeError: Failed to execute 'uniform2fv' on 'WebGL2RenderingContext': Overload resolution failed.
For this I create a ref for the uniforms:
const uniformRef = useRef({
  uAlpha: 0,
  uOffset: { x: 0.0, y: 0.0 },
  uTexture: imageTexture,
});
Then I created a useEffect to listen for the "mousemove" event:
useEffect(() => {
  document.addEventListener("mousemove", onMouseMove);
  return () => document.removeEventListener("mousemove", onMouseMove);
}, [onMouseMove]);
And finally I create the function called on "mousemove":
const onMouseMove = useCallback((e: MouseEvent) => {
  if (!planeRef.current || planeRef.current === undefined) return;
  // mouse coordinate
  let x = (e.clientX / window.innerWidth) * 2 - 1;
  let y = -(e.clientY / window.innerHeight) * 2 + 1;
  const position = new THREE.Vector3(x, y, 0);
  // change position of the mesh
  gsap.to(planeRef.current?.position, {
    duration: 1,
    x,
    y,
    ease: Power4.easeOut,
    onUpdate: () => onPositionUpdate(position),
  });
}, []);

// update the offset
const onPositionUpdate = (position: THREE.Vector3) => {
  if (planeRef.current) {
    let offset = planeRef.current.position
      .clone()
      .sub(position)
      .multiplyScalar(-0.25);
    gsap.to(uniformRef.current, {
      uOffset: { x: offset.x, y: offset.y },
      duration: 1,
      ease: Power4.easeOut,
    });
  }
};
And this is the initialisation of my shader:
const ColorShiftMaterial = shaderMaterial(
  {
    uTexture: new THREE.Texture(),
    uOffset: new THREE.Vector2(0.0, 0.0),
    uAlpha: 0,
  },
  ...
)
If you could help me with this or just give me some tips, it would help me a lot!
Thanks
I tried a lot of things, but every time I tried something new I still got the error. I thought it might be because I was giving uOffset a vec3, so I changed it to a vec2, but I still get the error.
First off, you should use react-three-fiber when dealing with three.js and React.
By naming convention, you should name your uniforms like u_mouse (with an underscore).
In react-three-fiber there is an onPointerMove event that you can use to track your mouse movement.
For example:
<mesh onPointerMove={(e) => handleMove(e)}>
To pass your values to the shader, you need to have uniforms.
const uniforms = {
  u_time: { value: 0.0 },
  u_mouse: { value: { x: 0.0, y: 0.0 } },
  u_resolution: { value: { x: 0, y: 0 } },
};
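A minimal sketch of what the handleMove handler from the mesh example above might do (the handler body is an assumption added here, not part of the original answer; react-three-fiber pointer events carry the intersection UV, which maps nicely onto a vec2 uniform):

import { ThreeEvent } from "@react-three/fiber";

// Hypothetical handler for <mesh onPointerMove={(e) => handleMove(e)}>,
// assuming the `uniforms` object above is in scope:
// write the pointer's UV on the mesh into the u_mouse uniform.
const handleMove = (e: ThreeEvent<PointerEvent>) => {
  if (!e.uv) return; // no UV at the intersection point
  uniforms.u_mouse.value.x = e.uv.x;
  uniforms.u_mouse.value.y = e.uv.y;
};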
You can use a useEffect to get the screen size and store it in your resolution uniform. Then you can use these values in your shader to account for the mouse position regardless of the screen size.
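For instance, a minimal sketch (assuming the `uniforms` object above is in scope and React's useEffect is imported):

useEffect(() => {
  // keep u_resolution in sync with the window size
  const onResize = () => {
    uniforms.u_resolution.value.x = window.innerWidth;
    uniforms.u_resolution.value.y = window.innerHeight;
  };
  onResize(); // set the initial values
  window.addEventListener("resize", onResize);
  return () => window.removeEventListener("resize", onResize);
}, []);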
I'm not sure how you have written your shader, but you can create a vec2 for the mouse and a vec2 for the resolution and do something like
vec2 v = u_mouse/u_resolution; // mouseposition / screensize
vec3 color = vec3(v.x, (v.y/v.x+0.2), v.y); // use the v-value for the x- and y-axis
gl_FragColor = vec4(color, 1.0); // add the new colors to your output
It all depends on what you want to happen and how you want it to look.

Corrupt PNG when drawing a long canvas?

I have a long PDF, around 6 pages, that I am generating from HTML, and I am getting a corrupt PNG image error. Any idea how to solve this? It happens only on iOS Safari.
html2canvas(
  document.getElementById('pdfItems')!,
  {
    onclone: function (clonedDoc) {
      // the element is hidden in the live DOM, so show it in the clone
      clonedDoc.getElementById('pdfItems')!.style.display = 'flex';
    },
    scale: 2, // higher scale for better quality
  }
).then((canvas) => {
  //document.body.appendChild(canvas); // if you want to see your screenshot in the body
  const imgData = canvas.toDataURL('image/png');
  // A4 page width in mm
  var imgWidth = 210;
  // A4 page height in mm
  var pageHeight = 295;
  // scale the canvas height to the A4 width (e.g. 2500 * 210 / 1140)
  var imgHeight = (canvas.height * imgWidth) / canvas.width;
  var heightLeft = imgHeight;
  var doc = new jsPDF('p', 'mm', 'a4', true);
  var topPadding = 10; // give some top padding to the first page
  doc.addImage(imgData, 'PNG', 0, topPadding, imgWidth, imgHeight);
  // the image is taller than one page, so add pages until it fits
  heightLeft -= pageHeight;
  while (heightLeft >= 0) {
    topPadding = heightLeft - imgHeight; // starting position to continue from
    doc.addPage();
    doc.addImage(imgData, 'PNG', 0, topPadding, imgWidth, imgHeight);
    heightLeft -= pageHeight;
  }
  doc.save(`${fileName()}.pdf`);
  setTimeout(() => _backdropContext.SetBackDrop(false), 500);
});
};

Water flow simulation through a pipe using Three.js

I want to simulate water flow through a pipe using Three.js in my React application. As you can see in the picture below, I want to achieve three things:
Draw a pipe
Simulate the water level based on a percentage (0-100); here the pipe is filled with 70% water (user-defined)
Animate the water flow using arrows moving through the pipe (left to right / right to left, user-defined)
Something I tried that is not working:
A hollow cylinder (pipe), based on ExtrudeGeometry:
body {
  overflow: hidden;
  margin: 0;
}
<script type="module">
import * as THREE from "https://cdn.skypack.dev/three@0.133.1";
import {
  OrbitControls
} from "https://cdn.skypack.dev/three@0.133.1/examples/jsm/controls/OrbitControls.js";

let scene = new THREE.Scene();
let camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 1, 1000);
camera.position.set(0, 0, 10);
camera.lookAt(scene.position);

let renderer = new THREE.WebGLRenderer({
  antialias: true
});
renderer.setSize(innerWidth, innerHeight);
renderer.setClearColor(0x404040);
document.body.appendChild(renderer.domElement);

let controls = new OrbitControls(camera, renderer.domElement);

let light = new THREE.DirectionalLight(0xffffff, 1);
light.position.setScalar(1);
scene.add(light, new THREE.AmbientLight(0xffffff, 0.5));

let r = 1, R = 1.25;

// pipe
let pipeShape = new THREE.Shape();
pipeShape.absarc(0, 0, R, 0, Math.PI * 2);
pipeShape.holes.push(new THREE.Path().absarc(0, 0, r, 0, Math.PI * 2, true));
let pipeGeometry = new THREE.ExtrudeGeometry(pipeShape, {
  curveSegments: 100,
  depth: 10,
  bevelEnabled: false
});
pipeGeometry.center();
let pipeMaterial = new THREE.MeshLambertMaterial({ color: "silver" });
let pipe = new THREE.Mesh(pipeGeometry, pipeMaterial);
scene.add(pipe);

window.addEventListener("resize", onResize);

renderer.setAnimationLoop(_ => {
  renderer.render(scene, camera);
});

function onResize(event) {
  camera.aspect = innerWidth / innerHeight;
  camera.updateProjectionMatrix();
  renderer.setSize(innerWidth, innerHeight);
}
</script>
It looks like your cylinder geometry is oversized compared to your scene.
See the following example:
https://codepen.io/freddy-turtle/pen/OJjVmvG?editors=1010
The cylinder fills the whole screen with CylinderGeometry(1, 1, 3) as its first arguments.
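A minimal sketch of that suggestion, reusing the scene and material setup from the question (the sizes and color are assumptions; with the camera at z = 10, as in the question's code, a cylinder of radius 1 and height 3 is clearly visible):

// CylinderGeometry(radiusTop, radiusBottom, height), sized to fit the scene
let cylinderGeometry = new THREE.CylinderGeometry(1, 1, 3);
let cylinder = new THREE.Mesh(
  cylinderGeometry,
  new THREE.MeshLambertMaterial({ color: "dodgerblue" }) // color is arbitrary
);
scene.add(cylinder);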

Drag move flickers position of window

I'm implementing a Rust alternative to .NET's DragMove method; however, the result causes the window to flicker between two relative positions.
See screencast and sample project.
Code I'm using to perform the drag move:
let mut mouse_down = false;
let mut last_pos: Option<PhysicalPosition<f64>> = None;
event_loop.run(move |event, _, control_flow| match event {
    Event::WindowEvent {
        event: WindowEvent::CursorMoved { position, .. },
        ..
    } => {
        let gl_window = display.gl_window();
        let window = gl_window.window();
        if mouse_down {
            if last_pos.is_some() {
                let previous_pos = last_pos.unwrap();
                let delta_x = previous_pos.x - position.x;
                let delta_y = previous_pos.y - position.y;
                window.set_outer_position(PhysicalPosition::new(
                    position.x + delta_x,
                    position.y + delta_y,
                ));
            }
            last_pos = Some(position);
        }
    }
    Event::WindowEvent {
        event: WindowEvent::MouseInput { state, button, .. },
        ..
    } => {
        mouse_down = button == MouseButton::Left && state == ElementState::Pressed;
        if !mouse_down {
            last_pos = None;
        }
    }
    _ => {}
});
CursorMoved reports "(x,y) coords in pixels relative to the top-left corner of the window."
When you're later using that position to set_outer_position, you are essentially reinterpreting window-relative coordinates as screen-relative.
You should instead apply the offset to the position returned from outer_position.
While that fixes the immediate problem, I'm not sure it is enough to account for the window movement. When you're handling the next CursorMoved event, the coordinates are still window-relative, but the window has since moved. That may produce artifacts all over.
A more robust solution would store the window's position when the drag-move operation starts, and offset that position by the accumulated deltas.

How to enable Vuforia's lighting estimation in native iOS when I use SceneKit and ARKit?

I am writing a scene node in an SCNScene for a VuforiaEAGLView.
fileprivate func createMyScene(with view: VuforiaEAGLView) -> SCNScene {
    let scene = SCNScene(named: "Models.scnassets/sun/DAE_SUN.dae")
    guard scene != nil else {
        print("Scene can't be loaded")
        return SCNScene()
    }

    boxMaterial.diffuse.contents = UIColor.lightGray

    let omniLightNode = SCNNode()
    omniLightNode.light = SCNLight()
    omniLightNode.light?.type = .omni
    omniLightNode.light?.color = UIColor(white: 0.75, alpha: 1.0)
    omniLightNode.position = SCNVector3(x: 50, y: 50, z: 50)
    scene?.rootNode.addChildNode(omniLightNode)

    let ambientLightNode = SCNNode()
    ambientLightNode.light = SCNLight()
    ambientLightNode.light?.type = .ambient
    ambientLightNode.light?.color = UIColor(white: 0.87, alpha: 1.0)
    scene?.rootNode.addChildNode(ambientLightNode)

    let spotLightNode = SCNNode()
    spotLightNode.light = SCNLight()
    spotLightNode.light?.type = .spot
    spotLightNode.light?.color = UIColor(white: 0.87, alpha: 1.0)
    spotLightNode.position = SCNVector3(x: 10, y: 10, z: 10)
    scene?.rootNode.addChildNode(spotLightNode)

    for node in (scene?.rootNode.childNodes)! {
        if node.name == "sun" {
            node.position = SCNVector3Make(0, 0, 0)
            node.scale = SCNVector3Make(0.5, 0.5, 0.5)
        }
    }

    return scene!
}
However, the light is not right, and I also want to turn on lighting estimation via ARKit. Vuforia support says the new Vuforia SDK supports ARKit lighting estimation, but they don't mention how to enable this feature.
Question: how do I turn on ARKit's lighting estimation in Vuforia?
