I'm developing a location-based app with Expo. Does anyone know how to track the user's location on the map and tell whether they have reached a specific destination, similar to GPS navigation? This is my current MapViewDirections setup:
<MapViewDirections
origin={userContext.userLocation}
destination={userContext.userLocation}
apikey={googleMapApi}
mode='DRIVING'
language='en'
strokeWidth={8}
strokeColor="#2A9D8F"
optimizeWaypoints={true}
resetOnChange={false}
precision={"low"}
/>
If you are using Expo, you should use watchPositionAsync from the expo-location API:
https://docs.expo.io/versions/latest/sdk/location/#locationwatchpositionasyncoptions-callback
Track the location changes in its callback and then feed the coordinates into the map.
Usage
// Assumes: import * as Location from 'expo-location';
//          import * as Permissions from 'expo-permissions';
async componentDidMount() {
  const { status } = await Permissions.askAsync(Permissions.LOCATION);
  if (status === 'granted') {
    this._getLocationAsync();
  } else {
    this.setState({ error: 'Location services needed' });
  }
}

componentWillUnmount() {
  // Stop watching for location updates when the component unmounts
  if (this.location) {
    this.location.remove();
  }
}

_getLocationAsync = async () => {
  this.location = await Location.watchPositionAsync(
    {
      enableHighAccuracy: true,
      distanceInterval: 1,
      timeInterval: 10000
    },
    newLocation => {
      const coords = newLocation.coords;
      // this.props.getMyLocation sets my reducer state my_location
      this.props.getMyLocation({
        latitude: parseFloat(coords.latitude),
        longitude: parseFloat(coords.longitude)
      });
    },
    error => console.log(error)
  );
  return this.location;
};
I'm using the Google Maps Directions API to draw routes on a map. It does what I want on the first call of DirectionsRenderer.setDirections(response), but on the second call it keeps the previous route and draws the new one on top of it. How can I clear the previous route?
Code:
export async function testRouteCalculation(
directionsService: google.maps.DirectionsService,
directionsRenderer: google.maps.DirectionsRenderer,
withWaypoints: boolean,
numWaypointsToInclude: number
) {
let request: google.maps.DirectionsRequest = {
origin: testOrigin,
destination: testDestination,
travelMode: google.maps.TravelMode["DRIVING"],
unitSystem: google.maps.UnitSystem.METRIC,
provideRouteAlternatives: false,
// region is specified for region biasing
region: "za",
waypoints: [],
};
if (withWaypoints) {
for (let i = 0; i < numWaypointsToInclude; i++) {
request!.waypoints!.push(testWaypoints[i]);
}
}
try {
const response = await directionsService.route(request);
return response;
} catch (err) {
throw err;
}
}
The map component:
const Map = () => {
const ref = React.useRef<HTMLDivElement>(null);
const [map, setMap] = React.useState<google.maps.Map>();
const [directionsRenderer, setDirectionsRenderer] =
React.useState<google.maps.DirectionsRenderer>();
const [directionsService, setDirectionsService] =
React.useState<google.maps.DirectionsService>();
React.useEffect(() => {
let newMap = null;
if (ref.current && !map) {
newMap = new window.google.maps.Map(ref.current, {
center: capeTownCoordinates,
zoom: 13,
streetViewControl: false,
mapTypeControl: false,
});
setMap(newMap);
}
const newDirectionsRenderer = new google.maps.DirectionsRenderer();
newDirectionsRenderer.setMap(newMap);
setDirectionsRenderer(newDirectionsRenderer);
setDirectionsService(new google.maps.DirectionsService());
}, [ref, map]);
if (map && directionsRenderer && !directionsRenderer.getMap()) {
directionsRenderer.setMap(map);
}
const handleClick = async () => {
if (directionsRenderer && directionsService) {
try {
const response = await testRouteCalculation(
directionsService,
directionsRenderer,
true,
2
);
directionsRenderer.setDirections(response);
} catch (err) {
console.log(err);
}
} else {
console.log("no directionsRenderer or directionsService object");
}
};
return (
<>
<div id="map" style={{ height: "300px", width: "400px" }} ref={ref}></div>
<button onClick={handleClick} className={styles["floating-button"]}>
Get route
</button>
</>
);
};
I searched around and saw proposed solutions such as directionsRenderer.setDirections(null) or directionsRenderer.setMap(null) before setting the new directions, and a couple of others, but none of them worked for me. I would have thought that .setDirections() would overwrite previous routes, but it seems that the routes drawn on the map and the directions stored in the directionsRenderer are decoupled.
I found that calling directionsRenderer.setDirections({ routes: [] }) before rendering the new route achieved what I was looking for.
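In other words, pass an empty routes array to setDirections() to wipe the previous route before drawing the new one. A rough sketch of how that could slot into the click handler above (in TypeScript the empty object may need a cast to satisfy the google.maps.DirectionsResult typings):

const handleClick = async () => {
  if (directionsRenderer && directionsService) {
    // Clear whatever route is currently drawn before rendering the new one
    directionsRenderer.setDirections({ routes: [] });
    const response = await testRouteCalculation(directionsService, directionsRenderer, true, 2);
    directionsRenderer.setDirections(response);
  }
};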
My component works fine and I can see it working properly, with the map rendering in the UI and the markers showing up correctly, but I still get this error for the challenge:
Challenge Not yet complete... here's what's wrong:
We can't find the correct settings for the createMapMarkers() function in the component boatsNearMe JavaScript file. Make sure the component was created according to the requirements, including the right mapMarkers, title, Latitude (Geolocation__Latitude__s), Longitude (Geolocation__Longitude__s), the correct constants, stopping the loading spinner, and using the proper case-sensitivity and consistent quotation.
This is the Challenge 7 Statement as per the trailhead:
"Build the component boatsNearMe and display boats near you
Create the component boatsNearMe and show boats that are near the user, using the browser location and boat type. Display them on a map, always with the consent of the user (and only while the page is open)."
https://trailhead.salesforce.com/en/content/learn/superbadges/superbadge_lwc_specialist
Here is my boatsNearMe LWC code
import { LightningElement, wire, api } from 'lwc';
import getBoatsByLocation from '@salesforce/apex/BoatDataService.getBoatsByLocation';
import { ShowToastEvent } from 'lightning/platformShowToastEvent';
const LABEL_YOU_ARE_HERE = 'You are here!';
const ICON_STANDARD_USER = 'standard:user';
const ERROR_TITLE = 'Error loading Boats Near Me';
const ERROR_VARIANT = 'error';
export default class BoatsNearMe extends LightningElement {
@api boatTypeId;
mapMarkers = [];
isLoading = true;
isRendered = false;
latitude;
longitude;
@wire(getBoatsByLocation, { latitude: '$latitude', longitude: '$longitude', boatTypeId: '$boatTypeId' })
wiredBoatsJSON({ error, data }) {
if (data) {
this.isLoading = true;
this.createMapMarkers(JSON.parse(data));
} else if (error) {
this.dispatchEvent(
new ShowToastEvent({
title: ERROR_TITLE,
message: error.body.message,
variant: ERROR_VARIANT
})
);
// this.isLoading = false;
}
}
getLocationFromBrowser() {
navigator.geolocation.getCurrentPosition(
position => {
this.latitude = position.coords.latitude;
this.longitude = position.coords.longitude;
},
(error) => {}
);
}
createMapMarkers(boatData) {
const newMarkers = boatData.map((boat) => {
return {
location: {
Latitude: boat.Geolocation__Latitude__s,
Longitude: boat.Geolocation__Latitude__s
},
title: boat.Name,
};
});
newMarkers.unshift({
location: {
Latitude: this.latitude,
Longitude: this.longitude
},
title: LABEL_YOU_ARE_HERE,
icon: ICON_STANDARD_USER
});
this.mapMarkers = newMarkers;
this.isLoading = false;
}
renderedCallback() {
if (this.isRendered == false) {
this.getLocationFromBrowser();
}
this.isRendered = true;
}
}
I think you have Geolocation__Latitude__s twice in this part:
createMapMarkers(boatData) {
const newMarkers = boatData.map((boat) => {
return {
location: {
Latitude: boat.Geolocation__Latitude__s,
Longitude: boat.Geolocation__Latitude__s // <<-- should be Longitude
},
title: boat.Name,
};
});
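With that one field corrected, the mapping would look roughly like this (a minimal excerpt; everything else stays as in your code):

createMapMarkers(boatData) {
    const newMarkers = boatData.map((boat) => ({
        location: {
            Latitude: boat.Geolocation__Latitude__s,
            Longitude: boat.Geolocation__Longitude__s  // corrected field
        },
        title: boat.Name
    }));
    // ...rest unchanged: unshift the "You are here!" marker, then
    // this.mapMarkers = newMarkers; this.isLoading = false;
}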
I am working on a group project where we have to make a weather app. I am doing the details section, which displays values such as temperature, humidity, chance of rain, etc. from an API. I am using two different APIs in this project as OpenWeatherMap does not have some of the data I need. A different section of our app is getting the location and it is passed down to my section, where it is put into the OpenWeatherMap URL without issue.
constructor(props){
super(props);
}
componentDidMount() {
if(this.props.location) {
this.fetchWeatherData1();
this.fetchWeatherData2();
}
}
componentDidUpdate(prevProps) {
if(this.props.location != prevProps.location) {
this.fetchWeatherData1();
this.fetchWeatherData2();
}
}
fetchWeatherData1 = () => {
let urlOWM = "http://api.openweathermap.org/data/2.5/weather?q=" + this.props.location + "&units=metric&APPID=" + API_KEY_OWM;
$.ajax({
url: urlOWM,
dataType: "jsonp",
success : this.parseFirstResponse,
error : function(req, err){ console.log('API call failed ' + err); }
})
}
parseFirstResponse = (parsed_json) => {
var feelsLike = this.formatTemp(parsed_json['main']['feels_like']);
var humidity = this.formatPercentage(parsed_json['main']['humidity']);
var wind = this.formatSpeed(parsed_json['wind']['speed']);
var visib = this.formatVis(parsed_json['visibility']);
var cloud = this.formatPercentage(parsed_json['clouds']['all']);
var lat = this.formatString(parsed_json['coord']['lat']);
var long = this.formatString(parsed_json['coord']['lon']);
// set states for fields so they could be rendered later on
this.setState({
feelsLike: feelsLike,
humidity: humidity,
windSpeed: wind,
visibility: visib,
cloudCover: cloud,
latitude: lat,
longitude: long
});
}
In parseFirstResponse(), I store the latitude and longitude values for the location. I have to do this because the second API (DarkSky) URL can only take the coordinates for some reason. Here is the code below, where I am placing the state values for latitude and longitude into the URL:
fetchWeatherData2 = () => {
let urlDS = "https://api.darksky.net/forecast/" + API_KEY_DS + "/" + this.state.latitude + "," + this.state.longitude + "?exclude=minutely,hourly,daily,alerts,flags";
$.ajax({
url: urlDS,
dataType: "jsonp",
success : this.parseSecondResponse,
error : function(req, err){ console.log('API call failed ' + err); }
})
}
parseSecondResponse = (parsed_json) => {
var precipChance = parsed_json['currently']['precipProbability'];
var precipType = "";
if (precipChance == 0.0) {
precipType = "Precipitation";
}
else {
precipType = this.capitalize(parsed_json['currently']['precipType']);
}
precipChance = this.formatDecimaltoPercentage(precipChance);
var uv = parsed_json['currently']['uvIndex'];
var dew = this.formatTemp(this.fToC(parsed_json['currently']['dewPoint']));
// set states for fields so they could be rendered later on
this.setState({
precipChance: precipChance,
precipType: precipType,
uvIndex: uv,
dewPoint: dew
});
}
When I run this code and put a location in for the first time, I get an error in the console that says "Failed to load resource: the server responded with a status of 400 ()" and the URL looks like this: https://api.darksky.net/forecast/767ed401d519be925156b6c885fce737/undefined,undefined?exclude=minutely,hourly,daily,alerts,flags&callback=jQuery34106961395668750288_1585829837010&_=1585829837011
When I put a second, different location in, however, without refreshing the page, the URL and API call works without issue.
The coordinates of the location are supposed to be where the words "undefined, undefined" are. I have tried to console.log() the latitude and longitude values in my parseSecondResponse function and gotten the right values. I think this is a synchronization issue, but I'm not too sure.
Calling the fetchWeatherData functions explicitly in the order 1 then 2 in componentDidMount() does not seem to help. I read about using Promises, but I am not very familiar with React, so I am unsure how to implement them or whether they would fix this issue.
I made a few small changes:
use fetch;
remove fetchWeatherData2 from componentDidMount;
trigger fetchWeatherData2 from componentDidUpdate once the coordinates are in state;
import React from 'react';
class Component extends React.Component {
constructor(props) {
super(props);
this.state = {
latitude: null,
longitude: null,
};
}
componentDidMount() {
const { location } = this.props;
if (location) {
this.fetchWeatherData1(location);
// you cannot call fetchWeatherData2() here because latitude and longitude are still null
}
}
componentDidUpdate(prevProps, prevState) {
const { location: prevLocation } = prevProps;
const { location: currLocation } = this.props;
const {
latitude: prevLatitude,
longitude: prevLongitude,
} = prevState;
const {
latitude: currLatitude,
longitude: currLongitude,
} = this.state;
if (prevLocation !== currLocation) {
this.fetchWeatherData1(currLocation);
}
if (prevLatitude !== currLatitude || prevLongitude !== currLongitude) {
this.fetchWeatherData2(currLatitude, currLongitude);
}
}
fetchWeatherData1(location) {
fetch(`http://api.openweathermap.org/data/2.5/weather?q=${location}&units=metric&APPID=${API_KEY_OWM}`)
.then(res => res.json())
.then((data) => {
// parse your data here
this.setState({
latitude: data.coord.lat,
longitude: data.coord.lon,
});
})
.catch(({ message }) => {
console.error(`API call failed: ${message}`);
});
}
fetchWeatherData2(latitude, longitude) {
fetch(`https://api.darksky.net/forecast/${API_KEY_DS}/${latitude},${longitude}?exclude=minutely,hourly,daily,alerts,flags`)
.then(res => res.json())
.then((data) => {
// parse your data here
})
.catch(({ message }) => {
console.error(`API call failed: ${message}`);
});
}
}
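Alternatively, since you mentioned Promises: you can skip the componentDidUpdate coordination entirely and chain the DarkSky request off the OpenWeatherMap response, so the coordinates are guaranteed to exist when the second call is made. A rough sketch reusing the method names above (error handling kept minimal):

fetchWeatherData1(location) {
  return fetch(`http://api.openweathermap.org/data/2.5/weather?q=${location}&units=metric&APPID=${API_KEY_OWM}`)
    .then(res => res.json())
    .then((data) => {
      const { lat, lon } = data.coord;
      this.setState({ latitude: lat, longitude: lon });
      // chain the second request off the first, using the fresh coordinates
      return this.fetchWeatherData2(lat, lon);
    })
    .catch(({ message }) => console.error(`API call failed: ${message}`));
}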
I'm new to React Native and I'm trying to preload 10 images at the start of the app. I followed the Expo documentation, but I want to cache images that are listed in an external file, and I get an error: [Unhandled Promise Rejection].
Here is my entries.js:
export const ENTRIES1 = [
{
title: 'Makeup Artists',
illustration: require('../assets/img/makeup.png')
},
{
title: 'Photographers',
illustration: require('../assets/img/Photographers.png')
},
{
title: 'Wedding Planners',
illustration: require('../assets/img/weddingPlanner.jpg')
},
{
title: 'Wedding Halls',
illustration: require('../assets/img/wedding-Hall.png')
},
{
title: 'Laser & Beauty Centers',
illustration: require('../assets/img/laser.png')
},
]
loadingScreen.js
async componentDidMount() { //Preload Fonts
await Asset.loadAsync(ENTRIES1.illustration),
await Font.loadAsync({
'Roboto': require('../../node_modules/native-base/Fonts/Roboto.ttf'),
'Roboto_medium': require('../../node_modules/native-base/Fonts/Roboto_medium.ttf'),
...Ionicons.font,
});
this.checkIfLoggedIn();
}
What am I doing wrong? Thanks.
Try this :)
function cacheImages(images) {
return images.map(image => {
if (typeof image.illustration === 'string') {
return Image.prefetch(image.illustration);
} else {
return Asset.fromModule(image.illustration).downloadAsync();
}
});
}
async componentDidMount() {
await Promise.all(cacheImages(ENTRIES1));
await Font.loadAsync({
'Roboto': require('../../node_modules/native-base/Fonts/Roboto.ttf'),
'Roboto_medium': require('../../node_modules/native-base/Fonts/Roboto_medium.ttf'),
...Ionicons.font,
});
this.checkIfLoggedIn();
}
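Note that this relies on a few imports that are not shown in the snippets; roughly (the exact module paths depend on your Expo SDK version, and the entries.js path is assumed):

import { Image } from 'react-native';
import { Asset } from 'expo-asset';
import * as Font from 'expo-font';
import { Ionicons } from '@expo/vector-icons';
import { ENTRIES1 } from './entries'; // adjust to where entries.js lives

Also, cacheImages() returns an array of promises, which is why it is wrapped in Promise.all() in componentDidMount above.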
This is a sample demonstration of what I intend to do. If anyone has an idea of how to fix this and make it work, or a different approach, please share.
The demonstration uses the MediaStream API together with the react-webcam library, which offers a way to control the camera view through the videoConstraints prop (facingMode: 'user' or 'environment'), but it doesn't seem to work.
When I click the camera-switch icon the screen just hangs and nothing shows, and sometimes it behaves unexpectedly, so I ultimately fell back to the native API solution shown in the code below.
Thanks in anticipation.
start() {
if (window.stream) {
console.log('found stream and clearing that', window.stream)
window.stream.getTracks().forEach(function(track) {
track.stop()
})
}
const constraints = {
video: true,
audio: false
}
return navigator.mediaDevices
.getUserMedia(constraints)
.then(this.gotStream)
.then(this.gotDevices)
.catch(this.handleError);
}
gotStream(stream) {
window.stream = stream // make stream available to console
// video.srcObject = stream;
// Refresh button list in case labels have become available
console.log('enumerating media devices ')
return navigator.mediaDevices.enumerateDevices()
}
gotDevices(mediaDevices) {
const { availableVideoInputs, videoConstraints } = this.state
mediaDevices.forEach(mediaDevice => {
// console.log(mediaDevice)
if (mediaDevice.kind === 'videoinput') {
console.log('found new video input ', mediaDevice)
availableVideoInputs.push({
deviceId: mediaDevice.deviceId,
label: mediaDevice.label
})
// availableVideoInputs.push('mediaDevice.deviceId.availableVideoInputs.push(mediaDevice.deviceId)')
}
})
console.log('aggregated availableVideoInputs new ', availableVideoInputs)
if (availableVideoInputs.length > 0) {
// there are accessible webcam
// setting first device as default to open
const tempVideoConstraint = {...videoConstraints}
if (availableVideoInputs[0].deviceId) {
console.log('availableVideoInputs[0] = ', availableVideoInputs[0])
tempVideoConstraint.deviceId = availableVideoInputs[0].deviceId
}
// console.log('putting tempVideoConstraint.facingMode ', tempVideoConstraint)
// if (availableVideoInputs[0].label.includes('back')) {
// tempVideoConstraint.facingMode = { exact: 'environment'}
// } else {
// // it is now turn to set front active
// tempVideoConstraint.facingMode = 'user'
// }
console.log('setting new video constrains ', tempVideoConstraint)
// this.setState({
// availableVideoInputs,
// // activeVideoInputID: availableVideoInputs[0].deviceId,
// // videoConstraints: tempVideoConstraint
// })
this.updateAvailableVideoStream(availableVideoInputs)
return Promise.resolve('done setting updateAvailableVideoStream')
} else {
// no webcam is available or accessible
console.error('ERR::VIDEO_STREAM_NOT_AVAILABLE')
}
}
updateAvailableVideoStream(availableVideoInputs) {
this.setState({ availableVideoInputs })
}
componentDidMount() {
this.start()
.then(data => {
console.log('data ', data)
console.log('update state ', this.state)
this.setState({
videoConstraints: {
...this.state.videoConstraints,
facingMode: 'user'
}
})
})
}
handleCameraSwitch() {
const { videoConstraints, availableVideoInputs, activeVideoInputID } = this.state
console.log('current video constraints ', videoConstraints)
const tempVideoConstraint = { ...videoConstraints }
// now check if it is possible to change camera view
// means check for another webcam
console.log({ availableVideoInputs })
console.log({ activeVideoInputID })
if (availableVideoInputs.length === 1) {
// cannot change the webcam as there is only 1 webcam available
console.error('ERR - cannot change camera view [Available Video Inputs: 1]')
return
}
// now change the view to another camera
// get the current active video input device id and filter then from available video stream
const remainingVideoStreams = availableVideoInputs.filter(videoStream => videoStream.deviceId !== activeVideoInputID)
// now check if in remainingVideoStreams there is more than 1 stream available to switch
// if available then show the Stream Selection List to user
// else change the stream to remainingVideoStreams[0]
console.log({ availableVideoInputs })
console.log({ activeVideoInputID })
console.log({ remainingVideoStreams })
if (remainingVideoStreams && remainingVideoStreams.length === 1) {
tempVideoConstraint.deviceId = remainingVideoStreams[0].deviceId
console.log('new video constraints ', {...tempVideoConstraint})
console.log('webcam ref ', this.webCamRef.current)
// if (remainingVideoStreams[0].label.includes('back') || tempVideoConstraint.facingMode === 'user') {
// tempVideoConstraint.facingMode = { exact: 'environment' }
// } else {
// // it is now turn to set front active
// tempVideoConstraint.facingMode = 'user'
// }
console.log('new video constraints with facing mode', tempVideoConstraint)
// const constraints = {
// video: tempVideoConstraint
// }
// navigator.mediaDevices.getUserMedia(constraints)
// .then((stream) => {
// console.log('stream -> ', stream)
// })
// .catch((error) => {
// console.error('Some error occured while changing the camera view ', error)
// console.log(error)
// })
this.setState({ videoConstraints: tempVideoConstraint, activeVideoInputID: remainingVideoStreams[0].deviceId })
} else {
// show the remaining stream list to user
}
}
This is a small variation of your implementation, but it works exactly as you wished.
See the implementation below for switching the camera between front and back.
I have also added error validation:
It will throw an error if there is no video stream available.
It will throw an error if there is only one video stream available when you try to switch to the back camera.
Please comment back if you have another approach or want more clarification.
componentDidMount() {
const gotDevices = (mediaDevices) =>
new Promise((resolve, reject) => {
const availableVideoInputs = []
mediaDevices.forEach(mediaDevice => {
if (mediaDevice.kind === 'videoinput') {
availableVideoInputs.push({
deviceId: mediaDevice.deviceId,
label: mediaDevice.label
})
}
})
if (availableVideoInputs.length > 0) {
resolve(availableVideoInputs)
} else {
reject(new Error('ERR::NO_MEDIA_TO_STREAM'))
}
})
navigator.mediaDevices.enumerateDevices().then(gotDevices)
.then((availableVideoInputs) => this.setState({ availableVideoInputs }))
.catch((err) => this.setState({ hasError: err }))
}
updateFileUploadView(newActiveView) {
this.setState({ activeFileUploadView: newActiveView })
const { hasError } = this.state
if (newActiveView === 'clickFromWebcam' && hasError) {
return console.error(hasError)
}
if (newActiveView === '') {
// means no view is active and clear the selected image
this.setState({ captureImageBase64: '', videoConstraints: defaultVideoConstraints })
}
}
changeCameraView() {
const { availableVideoInputs } = this.state
if (availableVideoInputs.length === 1) {
return console.error('ERR::AVAILABLE_MEDIA_STREAMS_IS_1')
}
this.setState({ resetCameraView: true })
setTimeout(() => {
const { videoConstraints: { facingMode } } = this.state
const newFacingMode = facingMode === 'user' ? { exact: 'environment' } : 'user'
this.setState({
videoConstraints: { facingMode: newFacingMode },
resetCameraView: false
})
}, 100)
}
!resetCameraView ?
<Webcam
audio={false}
height='100%'
ref={this.webCamRef}
screenshotFormat="image/png"
minScreenshotWidth={screenShotWidth}
minScreenshotHeight={screenShotHeight}
screenshotQuality={1}
width='100%'
videoConstraints={videoConstraints}
/>
: 'Loading...'
As you can see, this implementation uses the react-webcam library.
In componentDidMount you first check for the available media devices of kind 'videoinput'; other methods, such as changeCameraView, then handle switching the camera between front and back.
I'm unmounting the Webcam for only 100 ms and then mounting it back with the new videoConstraints, either { facingMode: 'user' } or { facingMode: { exact: 'environment' } }.
This approach should give your code a head start, and you can play around with the code from there.
Thank you!