I'm a little new to GraphQL and this question falls under "It cannot possibly be this hard. I have to be missing something."
I have a fairly standard GraphQL/Apollo/React application split into client and server. Everything is working well with the client making API calls and getting data back from the server. The client is even able to upload files to the server. However, I now need the server to stream back files saved on disk. That's it.
This is the "I have to be missing something" part. Everything I've seen in the docs and on Stackoverflow is some variation of pushing the file back from the server and through the GraphQL query as a base64-endocded string and then doing some very hacky stuff on the client, often involving a hidden href tag and a simulated click. To this I say, "What???"
Seriously. There are files on disk that the server knows how to find. The client needs to show a button to the user that they can click on to download the file. That's it. Every other framework in every other language has an easy way to do this. Can someone show me the incredibly simple thing that I'm missing here?
Thanks,
Alex
What you're missing is that GraphQL really shouldn't be used for this purpose.
While GraphQL itself does not mandate a particular format for serializing responses, the de facto format is JSON. And the only way to get a file inside a JSON response is to serialize it as a string.
If you want to serve static content, you should set up Nginx, Apache or another web server that's been built with that in mind. Alternatively, if you're already using some existing web server library like Express, it most likely has tools for serving static content as well.
Just because you have a GraphQL endpoint does not necessarily mean it should be the only way your client communicates with your backend.
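For example, if your GraphQL server is already an Express app, a couple of plain HTTP routes are enough. A minimal sketch (the files directory, route paths, and port are assumptions, not anything your setup requires):
var express = require('express');
var path = require('path');

var app = express();

// Serve everything in ./files as ordinary static content.
app.use('/files', express.static(path.join(__dirname, 'files')));

// Or stream one file as a download: res.download() sets Content-Disposition
// so the browser saves the file instead of rendering it.
app.get('/download/:name', function (req, res) {
    // path.basename strips directory parts, guarding against path traversal
    res.download(path.join(__dirname, 'files', path.basename(req.params.name)));
});

app.listen(4000);
On the client, the download button then becomes an ordinary link (e.g. <a href="/download/report.pdf">, where the file name is illustrative), with no base64 and no simulated clicks.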
I'm trying to layer WMS from NOAA in Tableau. How do I know which URL to use from the provided file?
Your first URL, the one with the WMS GetCapabilities request, has the right form (although you can usually cut the URL off right before the ? character; Tableau will add the GetCapabilities part of the request when needed if you don't).
The problem is that NOAA's server does not appear to be responding properly to that WMS request. You should see a response in XML format describing the server's capabilities, layers, etc.
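For reference, a GetCapabilities request generally has this shape (the host and path here are placeholders):
https://example.gov/wms?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetCapabilities
A working server answers with an XML document whose root element looks like <WMS_Capabilities version="1.3.0">, listing the available layers.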
So I’d ask the people that own the server what is going on.
Please presume that I do not know anything about any of the things I will be mentioning because I really do not.
Most OpenData sites offer the possibility of exporting the presented data in, for example, .csv or .json format (Example). They also always have an API tab (Example API).
I presume using the API would mean that if the data is updated you would receive the change, whereas exporting it as .csv would mean the content will not be changed anymore.
My question is: how does one use this API code to display the same table one would get when exporting a .csv file?
Would you use a database to extract this information? What kind of database and how do you link the API to the database?
I presume using the API would mean that if the data is updated you would receive the change, whereas exporting it as .csv would mean the content will not be changed anymore.
You are correct in the sense that, if you download the csv to your computer, that csv file won't be updated any more.
An API is something you would call - in this case, you can call the API, saying "Hey, do you have the latest data on xxx?", and you will be given back the latest information about what you have asked. This does not mean though, that this site will notify you when there's a new update - you will have to keep calling the API (every hour, every day etc) to see if there are any changes.
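For example, a crude polling loop in JavaScript could look like this (the URL and interval are placeholders, assuming an environment with fetch such as a modern browser or Node 18+):
// Ask the endpoint for fresh data once an hour and log what came back.
var url = 'https://example.com/api/latest-data'; // placeholder endpoint
setInterval(function () {
    fetch(url)
        .then(function (response) { return response.json(); })
        .then(function (data) { console.log('Latest data:', data); });
}, 60 * 60 * 1000);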
My question is: how does one use this API code to display the same table one would get when exporting a .csv file?
You would:
Call the API from server code or a cloud service
Let the server code or cloud service decipher (or "parse") the response
Use the deciphered response to create a table made out of HTML, or to place it into a database
Would you use a database to extract this information? What kind of database and how do you link the API to the database?
You wouldn't necessarily need a database to extract information, although a database would be nice to place the final data inside.
You would first need some way to "call the REST API". There are many ways to do this: using a shell script, Python, Excel VBA, etc.
I understand this is hard to visualize, so here is an example of step 1, where you can retrieve information.
Try pasting the URL below (taken from the site you showed us) into the address bar of your Chrome browser and hit Enter:
http://opendata.brussels.be/api/records/1.0/search/?dataset=associations-clubs-sportifs
See how it gives back a lot of text with many brackets and commas? You've basically asked the site to give you some data, and this is the response it gave back (different browsers behave differently - IE asks you to download the response as a .json file). You've just called an API.
To see this data more cleanly, open the developer tools of your Chrome browser and enter the following JavaScript code:
var url = 'http://opendata.brussels.be/api/records/1.0/search/?dataset=associations-clubs-sportifs';
var xhr = new XMLHttpRequest();
xhr.open('GET', url);
xhr.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
xhr.onload = function() {
    if (xhr.status === 200) {
        // success: log the parsed JSON response
        console.log(JSON.parse(xhr.responseText));
    } else {
        // error: the body often still contains a JSON message describing what went wrong
        console.log(JSON.parse(xhr.responseText));
    }
};
xhr.send();
When you hit Enter, a response will come back, displayed as "Object". If you click through the arrows, you can see this is a cleaner version of the data we just saw - more human-readable.
In this case, I used JavaScript to retrieve the data, but you can use whatever code you want. You could proceed to use JavaScript to decipher the data, manipulate it, and push it into a database.
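As a rough sketch of steps 2 and 3 from the list above, the following could run right after the snippet above to render the records as an HTML table. It assumes the shape this portal returns, where each element of data.records has a fields object:
function renderTable(data) {
    var rows = data.records.map(function (record) { return record.fields; });
    var headers = Object.keys(rows[0]); // simplification: field names of the first record
    var table = document.createElement('table');

    // header row: one cell per field name
    var headerRow = table.insertRow();
    headers.forEach(function (h) {
        var th = document.createElement('th');
        th.textContent = h;
        headerRow.appendChild(th);
    });

    // one body row per record, in the same column order as the headers
    rows.forEach(function (row) {
        var tr = table.insertRow();
        headers.forEach(function (h) {
            tr.insertCell().textContent = row[h] == null ? '' : row[h];
        });
    });

    document.body.appendChild(table);
}
Calling renderTable(JSON.parse(xhr.responseText)) inside the onload handler above would display the records as a table.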
kintone is an online cloud database that you can customize to run JavaScript code and have it store the data in its database, so the data ends up stored online. This is just one example of a database you can use.
There are other cloud services which allow you to connect API end points of different services with each other, like IFTTT and Zapier, but I'm not sure if they connect with open data.
The page you linked to shows that the API returns values as a JSON object. To access the data you can just send an appropriate HTTP request, and the response will be the requested data as JSON. You can send requests like that over your browser if you want to.
Most languages allow JSON objects to be manipulated programmatically if you need to do work on the data.
RESTful APIs follow a "request and publish" model: when you request data via an API endpoint, you receive response strings as JSON objects, CSV tables, or XML.
The publisher, in this case opendata.brussels.be, updates their database on a regular basis and publishes the results via an API endpoint.
If you want to download the table as a relational data table in a CSV file, you'd need to parse the JSON objects into relational tables. This can be tricky, since each JSON response string can vary in its structure.
There are several ways to do it. You can either write scripts to flatten the JSON objects or use a tool that parses and flattens the objects for you.
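As an illustration of the script approach, flattening the records from the endpoint shown earlier into CSV could look roughly like this in JavaScript (assuming the { records: [{ fields: {...} }] } shape from the other answer; real CSV escaping has more edge cases):
function recordsToCsv(data) {
    var rows = data.records.map(function (record) { return record.fields; });
    // collect every field name that appears in any record,
    // since records do not all carry the same fields
    var headers = [];
    rows.forEach(function (row) {
        Object.keys(row).forEach(function (key) {
            if (headers.indexOf(key) === -1) headers.push(key);
        });
    });
    // quote each value and double any embedded quotes
    function escape(value) {
        return '"' + String(value == null ? '' : value).replace(/"/g, '""') + '"';
    }
    var lines = rows.map(function (row) {
        return headers.map(function (h) { return escape(row[h]); }).join(',');
    });
    return [headers.map(escape).join(',')].concat(lines).join('\n');
}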
I use a tool called Acho to turn API endpoints into CSV files. It can parse almost all API endpoints through their parameters and can even be configured for multiple requests, such as iterative and recursive requests.
I would like to create a Java program that gets only the body/message of a mail. The mails are in .eml format and are stored locally on my machine. The problem is that when I look for ways to do this, I'm still led to use the Folder class, which requires an active mail server. Now, my question is: is there any way I can extract the message part of an e-mail using the JavaMail API without the need to connect to a server? Extracting the message part of an e-mail without including its headers is nearly impossible with a plain file stream.
Thanks for the help.
Cheers!
You can use a "local store provider" for JavaMail, but perhaps the simplest approach for you is to just use the MimeMessage that takes an InputStream. Give it a (buffered) FileInputStream and it will parse the message for you.
I'm working on a side project right now for an email client. I'm using a library to handle the retrieval of the messages from the server. However, I have a question on caching.
I don't want to fetch the entire list of headers every time I load the client. Ideally, what I'd like to do is cache them and then update the list with what is on the server.
What's the best way to go about this? Should I store all the header information (including the server's message ID #) in a database, load the headers from that DB, and then sync up with the server as a background task...
Or is there a better way?
Look at the webmail sample of this open source project, which uses local caching:
http://mailsystem.codeplex.com/
If I remember correctly, it used a combination of local RFC822 plain-text email storage, with the message id as the filename, and an index file with high-level data.
The messages themselves may have been zipped to save disk space.
That's just a sample for the library, so don't expect code art there, but that's a start.
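The same idea sketched in JavaScript (a Node sketch; the directory layout and field names are illustrative, not the sample's actual code):
var fs = require('fs');
var path = require('path');

var CACHE_DIR = 'mail-cache';
var INDEX_FILE = path.join(CACHE_DIR, 'index.json');

// the index holds high-level header data keyed by message id,
// so the client can render its list without touching the server
function loadIndex() {
    try {
        return JSON.parse(fs.readFileSync(INDEX_FILE, 'utf8'));
    } catch (err) {
        return {}; // no cache yet
    }
}

// store the raw RFC822 text under the message id and update the index;
// real message ids contain characters that need sanitizing before use as filenames
function cacheMessage(id, headers, rawRfc822) {
    fs.mkdirSync(CACHE_DIR, { recursive: true });
    fs.writeFileSync(path.join(CACHE_DIR, id + '.eml'), rawRfc822);
    var index = loadIndex();
    index[id] = headers; // e.g. { subject: ..., from: ..., date: ... }
    fs.writeFileSync(INDEX_FILE, JSON.stringify(index));
}
On startup the client would render from loadIndex(), then sync with the server in the background and call cacheMessage() for anything new.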