Access Alexa session persistence from another service - alexa

I have an Alexa-hosted skill. I am trying to find a way to access persistenceAdapter.S3PersistenceAdapter() from other places (e.g. Angular 2.x). Can this be done, or do I have to replace S3 with another database? If that is the case, which database is recommended?
I use some sample code to access S3 from the Alexa skill. I have no idea how attributesManager works; I just copied and pasted.
.withPersistenceAdapter(
  new persistenceAdapter.S3PersistenceAdapter({ bucketName: process.env.S3_PERSISTENCE_BUCKET })
)
and
const attributesManager = handlerInput.attributesManager;
const sessionAttributes = await attributesManager.getPersistentAttributes() || {};
const temperature = sessionAttributes.hasOwnProperty('temperature') ? sessionAttributes.temperature : 0;

S3 isn't a database; it's object storage. If a database is what you need (and it sounds like it is; you wouldn't normally use S3 for this), you could use DynamoDB instead.
In any case, you won't be able to use the ASK SDK in an Angular or other non-skill project. But you can connect to S3 (or DynamoDB) directly using the AWS SDK.
https://github.com/aws/aws-sdk-js
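For example, here's a minimal sketch of reading the skill's persisted attributes from outside the skill with the AWS SDK for JavaScript (v2). The bucket name and user ID are placeholders, and it assumes the S3PersistenceAdapter's default key scheme, where each user's attributes are stored as one JSON object keyed by the Alexa user ID:
// Hedged sketch: read persisted skill attributes from S3 outside the skill.
// bucketName and alexaUserId are placeholders.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function readSkillAttributes(bucketName, alexaUserId) {
  const obj = await s3
    .getObject({ Bucket: bucketName, Key: alexaUserId })
    .promise();
  return JSON.parse(obj.Body.toString('utf-8')); // e.g. { temperature: 10 }
}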
There are three types of attributes in the ASK SDK: request, session, and persistent. I noticed your variable is named sessionAttributes, but you're calling getPersistentAttributes.
Here's an example of how you'd use withPersistenceAdapter: https://www.talkingtocomputers.com/alexa-skills-kit-ask-sdk-v2#data-persistence
But here's an example if you use DynamoDB. It's simpler IMO:
module.exports.handler = Alexa.SkillBuilders.standard()
  .addRequestHandlers(/* your handlers */)
  .withTableName(/* your table name (string) */)
  .withDynamoDbClient()
  .lambda()
Then you could do something like (in an async function):
const att = await attributesManager.getPersistentAttributes()
const temperature = att.temperature ? att.temperature : 0
But of course, you need to save the attribute there first, if you want to access it. For example (in an async function):
const att = await attributesManager.getPersistentAttributes()
await attributesManager.setPersistentAttributes( { ...att, temperature: 10 }) // set the value
await attributesManager.savePersistentAttributes() // save it
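For context, here's a rough sketch of where that read/set/save flow would live inside a request handler; the handler and intent names below are placeholders, and attributesManager comes off handlerInput as in your snippet:
// Hypothetical handler showing the full persistent-attributes flow
const SetTemperatureHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'SetTemperatureIntent'
  },
  async handle(handlerInput) {
    const { attributesManager } = handlerInput
    const att = await attributesManager.getPersistentAttributes()
    await attributesManager.setPersistentAttributes({ ...att, temperature: 10 })
    await attributesManager.savePersistentAttributes()
    return handlerInput.responseBuilder.speak('Temperature saved.').getResponse()
  }
}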

You can also use S3 buckets; the syntax looks like this. They have get, set, and save operations like the persistent attributes in the example above; for more, see the Amazon docs.
const { S3PersistenceAdapter } = require('ask-sdk-s3-persistence-adapter');
const s3PersistenceAdapter = new S3PersistenceAdapter({ bucketName: 'FooBucket' });

Related

How to store data in this very simple React Native app?

I'm developing an app using React Native that allows you to create your own checklists and add items to them.
For example, you'd have "Create Checklist", and inside that you'd have the options to "Add Item", "Delete Item", "Edit Item", basic CRUD methods, etc.
It's going to be completely offline but I'm wondering what the best approach to storing this data locally would be.
Should I be using a DB such as Firebase? I have read that it is overkill and to use something like Redux instead, but I'm not sure if the latter will accomplish everything I need. As long as it stores data which can be edited and saves on the user's device (with minimal effort), it sounds good to me.
Would appreciate some input on this, thanks!
You could use AsyncStorage for persisting data locally on the user's phone. It is a simple persistent key-value store.
Each checklist is most likely an array of JS objects. The documentation provides an example on how to store objects.
const storeData = async (value) => {
  try {
    const jsonValue = JSON.stringify(value)
    await AsyncStorage.setItem('@storage_Key', jsonValue)
  } catch (e) {
    // saving error
  }
}
The value parameter is any JS object. We use JSON.stringify to create a JSON string and AsyncStorage.setItem to persist the data. The string @storage_Key is the key for the object; this could be any string.
We retrieve a persisted object as follows.
const getData = async () => {
  try {
    const jsonValue = await AsyncStorage.getItem('@storage_Key')
    return jsonValue != null ? JSON.parse(jsonValue) : null
  } catch (e) {
    // error reading value
  }
}
Both examples are taken from the official documentation.
Keep in mind that this functionality should be used for persistence only. While the application is running, you should load the complete list, or parts of it if it is very large, into some sort of application cache. The implementation for this now heavily depends on what your current code looks like. If you have a plain view, you could access the local storage in an effect and store the result in local state.
function MySuperList() {
  const [list, setList] = React.useState([]);

  React.useEffect(() => {
    // Load the persisted list once on mount using getData() from above
    getData().then((storedList) => {
      if (storedList != null) setList(storedList);
    });
  }, []);

  // render list
  return (...);
}
I would implement some sort of save button for this list. If it is pressed, then we persist the data in the local storage of the phone.
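The save button could be as simple as the following sketch, assuming the storeData helper from above and React Native's built-in Button; SaveButton and its list prop are placeholders:
import { Button } from 'react-native'

// Hypothetical save button: persists the current list when pressed
function SaveButton({ list }) {
  return <Button title="Save" onPress={() => storeData(list)} />
}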

Upload images to Azure blob from front end (React)

The front end enables people to upload their photos, so I was sending the base64 to the server and working with it initially, but there are problems with a firewall which blocks requests containing base64. As an alternative solution I am trying to upload the image to an Azure blob, get the file name, and then send that to the server for processing, where I generate a SAS token for the blob validation and processing.
This works perfectly fine when I work locally; the front end connects with @azure/storage-blob and uploadBrowserData() when I send the arrayBuffer as the param:
export const uploadSelfieToBlob = async (arrayBuffer) => {
  try {
    const blobURL = `https://${accountName}.blob.core.windows.net${sasString}`;
    const blobServiceClient = new BlobServiceClient(blobURL, anonymousCredential);
    const containerClient = blobServiceClient.getContainerClient(containerName);
    const randomString = Math.random().toString(36).substring(7);
    const blobName = `${randomString}_${new Date().getTime()}.jpg`;
    const blockBlobClient = containerClient.getBlockBlobClient(blobName);
    const uploadBlobResponse = await blockBlobClient.uploadBrowserData(arrayBuffer);
    return { blobName, blobId: uploadBlobResponse.requestId };
  } catch (error) {
    console.log('error when uploading to blob', error);
    throw new Error('Error Uploading the selfie to blob');
  }
};
When I deploy, this is not working. The front end is deployed in the EastUS2 location and the local development location is different. I thought the sasString generated for anonymous access had the timezone option, so I generated two different ones, one for local and one for the hosted server, with the same location selected.
Failed to send request to https://xxxx.blob.core.windows.net/contanainer-name/26pcie_1582087489288.jpg?sv=2019-02-02&ss=b&srt=c&sp=rwdlac&se=2023-09-11T07:57:29Z&st=2020-02-18T00:57:29Z&spr=https&sig=9IWhXo5i%2B951%2F8%2BTDqIY5MRXbumQasOnY4%2Bju%2BqF3gw%3D
What am I missing? Any lead would be helpful, thanks.
First, as mentioned in the comments, there was an issue with the CORS settings, which caused the initial error.
AuthorizationResourceTypeMismatch: This request is not authorized to perform this operation using this resource type.
RequestId:7ec96c83-101e-0001-4ef1-e63864000000
Time:2020-02-19T06:57:31.2867563Z
I looked up this error code here and then closely looked at your SAS URL.
One thing I noticed in your SAS URL is that you have set the signed resource type (srt) to c (container) and are trying to upload a blob. If you look at the description of the kinds of operations you can do with srt=c here, you will notice that blob-related operations are not supported.
In order to perform blob related operations (like blob upload), you would need to set signed resource type value to o (for object).
Please regenerate your SAS token and include signed resource type object (you can include container and/or service as well) and then your request should work. Essentially the srt in your SAS URL should be something like srt=o, srt=co, or srt=sco.
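If you generate the SAS token in code rather than in the portal, here's a hedged sketch using @azure/storage-blob (v12) on the server; accountName and accountKey are placeholders and must stay server-side:
const {
  StorageSharedKeyCredential,
  generateAccountSASQueryParameters,
  AccountSASPermissions,
  AccountSASServices,
  AccountSASResourceTypes,
} = require('@azure/storage-blob');

const credential = new StorageSharedKeyCredential(accountName, accountKey);

const sasToken = generateAccountSASQueryParameters(
  {
    services: AccountSASServices.parse('b').toString(),            // blob service
    resourceTypes: AccountSASResourceTypes.parse('co').toString(), // container + object
    permissions: AccountSASPermissions.parse('rwdlac'),
    expiresOn: new Date(Date.now() + 60 * 60 * 1000),              // 1 hour
  },
  credential
).toString(); // append as `?${sasToken}` to the account URL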
I couldn't see anything wrong with the code you posted, but I have been using a different method to upload files to Azure Blob Storage from React; the method is exactly the same as in this blog article, which works perfectly for me.
https://medium.com/@stuarttottle/upload-to-azure-blob-storage-with-react-34f37805fdfc

Is there a way to batch read firebase documents

I am making a mobile app using flutter with firebase as my backend.
I have a collection of user documents that stores user information. One of the fields is an array of references (to documents in another collection) which I want to use in a batch-like operation that would allow me to read all the referenced documents.
I know a batch only allows writes to the database. My second option would be a transaction, which requires writes after reads, which I am trying to avoid.
Is there a way to read multiple documents in one operation without having to use a transaction?
Firestore doesn't offer a formal batch read API. As Frank mentions in his comment, there is a way to use IN to fetch multiple documents from a single collection using their IDs. However, all of the documents must be in the same collection, and you can't exceed 10 documents per query. You might as well just get() each document individually, as the IN query has limitations and isn't guaranteed to execute any faster than the individual gets. Neither solution is guaranteed to be "consistent", so any one of the fetched documents could be "more fresh" than the others at any given moment.
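For what it's worth, here's a minimal sketch of the IN approach with the Node.js Admin SDK, assuming a users collection and hypothetical document IDs:
const admin = require('firebase-admin');
admin.initializeApp();

const db = admin.firestore();

// All documents must be in the same collection; at most 10 IDs per query
const ids = ['Jkd94kdmdks', '8nkdjsld'];

db.collection('users')
  .where(admin.firestore.FieldPath.documentId(), 'in', ids)
  .get()
  .then(snapshot => snapshot.docs.forEach(doc => console.log(doc.id, doc.data())));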
If you know the document IDs and the collection paths of the documents needed to be fetched, you could always use the getAll() method which is exposed in the firebase Admin SDK (at least for Node.js environments).
Then, for example, you could write an HTTPS Callable Function that would accept a list of absolute document paths and perform a "batch get" operation on them using the getAll() method.
e.g.
// Import firebase functionality
const functions = require('firebase-functions');
const admin = require('firebase-admin');

// Configure firebase app
admin.initializeApp(functions.config().firebase);

// HTTPS callable function
exports.getDocs = functions.https.onCall((data, context) => {
  const docPathList = data.list; // e.g. ["users/Jkd94kdmdks", "users/8nkdjsld", etc...]
  const firestore = admin.firestore();
  const docList = [];
  for (let i = 0; i <= docPathList.length - 1; i++) {
    const docPath = docPathList[i];
    const doc = firestore.doc(docPath);
    docList.push(doc);
  }
  // Get all
  return firestore.getAll(...docList)
    .then(results => {
      return { data: results.map(doc => doc.data()) };
    })
    .catch(err => {
      return { error: err };
    });
});
Not sure what the limit (if any) is for the number of documents you can fetch using getAll(), but I do know my application is able to fetch at least 50 documents per call successfully using this method.
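On the client, calling that function might look like this (sketched with the Firebase JS SDK v8-style API; the function name matches the export above):
const getDocs = firebase.functions().httpsCallable('getDocs');

getDocs({ list: ['users/Jkd94kdmdks', 'users/8nkdjsld'] })
  .then(result => {
    // result.data has the shape returned above: { data: [...] }
    console.log(result.data.data);
  });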
Firestore has a REST API that allows you to do batch GETs with document paths, which may be what you need.
See https://firebase.google.com/docs/firestore/reference/rest/v1beta1/projects.databases.documents/batchGet
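A rough sketch of that REST call from JavaScript, assuming a project ID and a bearer token (both placeholders); each entry in documents must be a full resource path:
const projectId = 'my-project'; // placeholder
const url = `https://firestore.googleapis.com/v1beta1/projects/${projectId}/databases/(default)/documents:batchGet`;

const res = await fetch(url, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${token}`, // placeholder auth token
  },
  body: JSON.stringify({
    documents: [
      `projects/${projectId}/databases/(default)/documents/users/Jkd94kdmdks`,
      `projects/${projectId}/databases/(default)/documents/users/8nkdjsld`,
    ],
  }),
});
const results = await res.json(); // one response entry per requested document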

How can I export static HTML pages from next.js when they need data from a third-party API?

I’m using next.js to build static HTML webpages.
One of my webpages needs data from a third-party API, which I’d like to fetch at build time and bake into the resulting HTML.
I don’t want this call to ever happen on the client, because:
CORS prevents the request from succeeding anyway
I would have to expose an API key on the client (no thank you)
I thought getInitialProps was the answer, because the fetched data is indeed baked in during the build/export process, but when I navigate away from the page and return from it, getInitialProps gets triggered on the client, breaking everything.
My current code in getInitialProps is something like:
static async getInitialProps() {
  // Get Behance posts
  const behanceEndpoint = `https://www.behance.net/v2/users/${process.env.BEHANCE_USERNAME}/projects?api_key=${process.env.BEHANCE_API_KEY}`
  const behanceRes = await fetch(behanceEndpoint)
  let behancePosts = await behanceRes.json()
  // Return only the required number of posts
  return {
    behancePosts: behancePosts
  }
}
Any advice or best practice on how to handle this? I know Gatsby.js does it out of the box.
One possibility, if you just want to execute this once on the server, would be to check whether the req parameter is present in getInitialProps (Documentation):
req - HTTP request object (server only).
One dirty approach:
static async getInitialProps({ req }) {
  if (req) {
    // only executed on server
    // Get Behance posts
    const behanceEndpoint = `https://www.behance.net/v2/users/${process.env.BEHANCE_USERNAME}/projects?api_key=${process.env.BEHANCE_API_KEY}`
    const behanceRes = await fetch(behanceEndpoint)
    let behancePosts = await behanceRes.json()
    // Return only the required number of posts
    return {
      behancePosts: behancePosts
    }
  } else {
    // client context
  }
}
Hope this helps a little bit.

Ways to access firebase storage (photos) via web app

I'm confused as to the appropriate way to access a bunch of images stored in Firebase storage with a react redux firebase web app. In short, I'd love to get a walkthrough of, once a photo has been uploaded to firebase storage, how you'd go about linking it to a firebase db (like what exactly from the snapshot returned you'd store), then access it (if it's not just <img src={data.downloadURL} />), and also how you'd handle (if necessary) updating that link when the photo gets overwritten. If you can answer that, feel free to skip the rest of this...
Two options I came across are either
store the full URL in my firebase DB, or
store something less, like the path within the bucket, then call getDownloadURL() for every photo... which seems like a lot of unnecessary traffic, no?
My db structure at the moment is like so:
{
  <someProjectId>: {
    imgs: {
      <someAutoGenId>: {
        "name": "photo1.jpg",
        "url": "https://<bucket, path, etc>token=<token>"
      },
      ...
    },
    <otherProjectDetails>: "",
    ...
  },
  ...
}
Going forward with that structure and the first idea listed, I ran into trouble when a photo was overwritten, so I would need to go through the list of images and remove the db record that matches the name (or find it and update its URL). I could do this (at most, there would be two refs with the old token that I would need to replace), but then I saw people doing it via option 2, though not necessarily with my exact situation.
The last thing I did see a few times, were similar questions with generic responses pointing to Cloud Functions, which I will look into right after posting, but I wasn't sure if that was overcomplicating things in my case, so I figured it couldn't hurt too much to ask. I initially saw/read about Cloud Functions and the fact that Firebase's db is "live," but wasn't sure if that played well in a React/Redux environment. Regardless, I'd appreciate any insight, and thank you.
In researching Cloud Functions, I realized that the use of Cloud Functions wasn't an entirely separate option, but rather a way to accomplish the first option I listed above (and probably the second as well). I really tried to make this clear, but I'm pretty confident I failed... so my apologies. Here's my (2-Part) working solution to syncing references in Firebase DB to Firebase Storage urls (in a React Redux Web App, though I think Part One should be applicable regardless):
PART ONE
Follow along here https://firebase.google.com/docs/functions/get-started to get cloud functions enabled.
The part of my database with the info I was storing relating to the images was at /projects/detail/{projectKey}/imgs and had this structure:
{
  <autoGenKey1>: {
    name: 'image1.jpg',
    url: <longURLWithToken>
  },
  <moreAutoGenKeys>: {
    ...
  },
  ...
}
My cloud function looked like this:
exports.updateURLToken = functions.database.ref(`/projects/detail/{projectKey}/imgs`)
  .onWrite(event => {
    const projectKey = event.params.projectKey
    const newObjectSet = event.data.val()
    const newKeys = Object.keys(newObjectSet)
    const oldObjectSet = event.data.previous.val()
    const oldKeys = Object.keys(oldObjectSet)
    let newObjectKey = null

    // If something was removed, none of this is necessary - return
    if (oldKeys.length > newKeys.length) {
      return null
    }

    // Looking for the new object -> will be missing in oldObjectSet
    for (let i = 0; i < newKeys.length; ++i) {
      const key = newKeys[i]
      if (oldKeys.indexOf(key) === -1) {
        // Found new object
        newObjectKey = key
        break
      }
    }

    // Checking if new object overwrote an existing object (same name)
    if (newObjectKey !== null) {
      const newObject = newObjectSet[newObjectKey]
      let duplicateKey = null
      for (let i = 0; i < oldKeys.length; ++i) {
        const oldObject = oldObjectSet[oldKeys[i]]
        if (newObject.name === oldObject.name) {
          // Duplicate found
          duplicateKey = oldKeys[i]
          break
        }
      }
      if (duplicateKey !== null) {
        // Remove duplicate
        return event.data.ref.child(duplicateKey).remove((error) => error ? 'Error removing duplicate project detail image' : true)
      }
    }
    return null
  })
After loading this function, it would run every time anything changed at that location (projects/detail/{projectKey}/imgs). So I uploaded the images, added a new object to my db with the name and url, then this would find the new object that was created, and if it had a duplicate name, that old object with the same name was removed from the db.
PART TWO
So now my database had the correct info, but unless I refreshed the page after images were uploaded, adding the new object to my database resulted (locally) in me still having all the duplicate refs, and this is where the realtime database came into play.
Inside my container, I have:
function mapDispatchToProps (dispatch) {
  syncProjectDetailImages(dispatch) // the relevant line -> imported from api.js
  return bindActionCreators({
    ...projectsContentActionCreators,
    ...themeActionCreators,
    ...userActionCreators,
  }, dispatch)
}
Then my api.js holds that syncProjectDetailImages function:
const SAVING_PROJECT_SUCCESS = 'SAVING_PROJECT_SUCCESS'

export function syncProjectDetailImages (dispatch) {
  ref.child(`projects/detail`).on('child_changed', (snapshot) => {
    dispatch(projectDetailImagesUpdated(snapshot.key, snapshot.val()))
  })
}

function projectDetailImagesUpdated (key, updatedProject) {
  return {
    type: SAVING_PROJECT_SUCCESS,
    group: 'detail',
    key,
    updatedProject
  }
}
And finally, dispatch is figured out in my modules folder (I used the same function I would when saving any part of an updated project with redux - no new code was necessary)
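For illustration, a reducer case handling that action might look roughly like this; since I reused my existing save-project reducer, every name below is an assumption:
// Hypothetical reducer sketch: merge the updated project into state
function projects (state = {}, action) {
  switch (action.type) {
    case SAVING_PROJECT_SUCCESS:
      return {
        ...state,
        [action.group]: {
          ...state[action.group],
          [action.key]: action.updatedProject,
        },
      }
    default:
      return state
  }
}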
