What design is used to store file objects that get uploaded to a website? For instance, suppose I have a website that accepts documents or images.
Use Case 1.
A user logs in, selects an MS Word file on their machine, and uploads it to the website.
Use Case 2.
A user logs in, selects an image on their machine, and uploads it to the website.
How do I store these file objects in the database?
The first step is just getting the file from the AngularJS application to the server. This page talks about sending requests to the server from the client and should get you started.
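As a rough, hedged sketch of that client-side step (the /api/upload route and the 'document' field name below are placeholders, not something from the linked page), an AngularJS upload could look like this:

    // Minimal AngularJS sketch: send a user-selected file to the server as multipart/form-data.
    // The '/api/upload' route and the 'document' field name are hypothetical placeholders.
    var app = angular.module('uploadApp', []);

    app.controller('UploadCtrl', function ($scope, $http) {
      $scope.upload = function (file) {          // file comes from an <input type="file">
        var formData = new FormData();
        formData.append('document', file);

        $http.post('/api/upload', formData, {
          transformRequest: angular.identity,    // don't serialize the FormData
          headers: { 'Content-Type': undefined } // let the browser set the multipart boundary
        }).then(function (response) {
          console.log('Upload succeeded', response.data);
        }, function (error) {
          console.error('Upload failed', error);
        });
      };
    });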
Once you have done that (assuming you are using PHP), you will need to save the resulting file to the database. This post will get you started with saving files to PostgreSQL, but the details will end up being very specific to your situation.
If you have more questions after reading through those resources please add specific details about your setup to your question.
I'm currently trying to allow users to click a button in a React application to open a .docx document in their local version of Word.
I have a C# REST endpoint set up that will return a document as a FileStreamResult if you pass it a document ID.
I'm wondering if anyone has accomplished this before, as I was eventually hoping that I could pass an endpoint into the save parameter so that, once saved, the document would upload back to the server.
I tried running the ms-word command, however it comes up with "This action couldn't be performed because Office doesn't recognize the command it was given": ms-word:ofe|u|<>
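To be concrete, this is roughly what I'm invoking (the document URL here is just a placeholder for my real endpoint):

    // Sketch of launching Word via the Office URI scheme from the browser.
    // 'ofe' = open for edit, 'u' = the document URL argument; the URL below is a placeholder.
    var docUrl = 'https://example.com/api/documents/123.docx';
    window.location.href = 'ms-word:ofe|u|' + docUrl;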
Thanks in advance.
I'm writing a single-page web app (AngularJS) and a server back-end (Node.js). The communication between them is done via REST.
Currently I'm trying to implement the following scenario:
Upload big files from the browser to a public S3 bucket.
Copy the uploaded file to a private bucket on S3.
Transcode the uploaded file to an HTML5-compatible format (AWS Elastic Transcoder).
Store a meta-object about the file in the DB to access it later.
I'm racking my brain to get a well-working design for the communication/data workflow between server and client, but I always get stuck on the following questions:
Should I store the file meta-object at the end or at the beginning of the process? If at the beginning, do I have to store and handle some state information?
Who should start copying uploaded files to the private bucket, the server or the client? If it is the server, how can the client be informed that the job succeeded?
Who starts the transcoding process? If it is the server, how can the client be informed that the job succeeded?
How would you do this?
There is a pretty good tutorial which describes the use case you are planning to implement: http://www.bitcodin.com/blog/2015/02/create-mpeg-dash-hls-content-for-amazon-s3-and-cloudfront/
If your transcoding system has a RESTful API (like bitcodin, which is used in this tutorial, or any other service), you can also build your application client-side and use the API calls to get the state of your transcodings, etc. However, you can do the same server-side using the API, whichever fits you better.
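For example (purely illustrative, with made-up endpoint and field names; check the docs of whatever transcoding service you pick), a client-side status poll could look like this:

    // Hypothetical polling loop: ask the transcoding API for the job state every few seconds
    // until it reports 'finished' or 'error'. Endpoint and field names are placeholders.
    function pollJobStatus($http, jobId, onDone, onError) {
      var timer = setInterval(function () {
        $http.get('https://api.example-transcoder.com/jobs/' + jobId).then(function (res) {
          if (res.data.status === 'finished') {
            clearInterval(timer);
            onDone(res.data);
          } else if (res.data.status === 'error') {
            clearInterval(timer);
            onError(res.data);
          }
          // any other status: keep polling
        });
      }, 5000);
    }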
I personally would store the metadata at the beginning of the process, as this is the point in time where you generate the "asset" in your database/CMS/etc.
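For instance, the meta-object created up front could carry a status field that doubles as the state information mentioned in the question (all field names here are just an illustration):

    // Illustrative shape of a meta-object stored at the start of the process.
    // The status field tracks which step the file is in and is updated as each step finishes.
    var fileMeta = {
      id: 'a1b2c3',                      // generated by your DB/CMS
      originalName: 'holiday.mp4',
      s3PublicKey: 'uploads/a1b2c3.mp4',
      s3PrivateKey: null,                // filled in after the copy step
      transcodedUrl: null,               // filled in after transcoding
      status: 'uploading'                // uploading -> copying -> transcoding -> ready / failed
    };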
Is it possible to use a local database file with HTML5 without using a server? I would like to create a small application that depends on information from a small database. I do not want to host a server just to pull information. Is it possible to create a database file and pull information from the local files?
Depends on the following:
The type of application you want to build:
Normal website with some data being pulled from local storage;
Special purpose hosted website / application with data generated by the user;
Special purpose local application with a dedicated platform (a particular browser) and with access to the browser's non-web API -- in order to access the browser's own persistent storage methods (file storage, SQLite etc.);
Special purpose local application with a dedicated environment -- in order to deploy the application with a local web server and database;
Available options:
Indexed DB
Web Storage
XML files used for storing data and XSLT stylesheets for translating the data into HTML;
Indexed DB and Web Storage are available in some browsers, but you need to make sure the targeted browsers have them. Their features aren't quite as complete and flexible as SQL RDBMSs, but they may fit the bill if your application doesn't need all that flexibility.
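As a small example of how simple Web Storage is to use (the key name is arbitrary):

    // Web Storage sketch: persist a small list of items in the browser between sessions.
    var names = JSON.parse(localStorage.getItem('names') || '[]');

    function addName(name) {
      names.push(name);
      localStorage.setItem('names', JSON.stringify(names)); // survives reloads on this machine/browser
    }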
XML files can contain the data you want to be shown to the user and they can be updated manually (not by the user) or dynamically (by a server script).
For dynamic updating, the content of the XML is kept in JavaScript and manipulated/altered (using the XML DOM), and when the session is over the XML content is sent to the server to entirely replace the previous XML file. This works OK if the individual users have a file each and they never write to each other's files.
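A rough sketch of that round trip (the element names and the /save-data URL are placeholders):

    // Parse the XML, alter it in memory via the DOM, then send the whole document back.
    var xmlString = '<items></items>';                 // e.g. downloaded from the server earlier
    var doc = new DOMParser().parseFromString(xmlString, 'application/xml');

    var item = doc.createElement('item');
    item.textContent = 'new entry';
    doc.documentElement.appendChild(item);

    var updatedXml = new XMLSerializer().serializeToString(doc);
    // Replace the server-side copy when the session ends; '/save-data' is a placeholder URL.
    fetch('/save-data', {
      method: 'POST',
      headers: { 'Content-Type': 'application/xml' },
      body: updatedXml
    });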
Reading local files:
Normal file access is prohibited (for security reasons) to all local (JavaScript) code, which means that "having" a file locally implies either downloading it from a known source (a server) or asking the user to offer access to a local file.
Asking the user to offer access to a local file implies offering the user a "file input" -- like for uploads, but without actually uploading the file.
After a file has been selected, using the File API to read that file should be fairly simple.
This workflow would involve the user "giving" you the database on every page refresh -- but since it's a one-page thing, it would mean giving you the data once per session, as long as your script does not refresh the page.
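A minimal sketch of that "file input plus File API" approach (the element ID is made up):

    // Let the user hand the page a local file, then read it in the browser -- nothing is uploaded.
    // Assumes an <input type="file" id="dbFile"> element exists in the page.
    document.getElementById('dbFile').addEventListener('change', function (event) {
      var file = event.target.files[0];
      var reader = new FileReader();
      reader.onload = function () {
        var contents = reader.result;  // the file's text, e.g. your XML or JSON "database"
        console.log('Loaded', file.name, 'with', contents.length, 'characters');
      };
      reader.readAsText(file);
    });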
You can use localStorage, or you can run a server from your own computer. You can use WAMP or XAMPP, which use Apache and MySQL.
What I'm looking for is a little more robust than a cookie. I am making a web application for a friend that will be one page, with a list of names on the page. The person wants to be able to add names to the list; however, they do not want to use a web server. They just want the files locally on a computer: a folder called test-app with index.html, and possibly a database file that can be stored in the web browser, or a way to save information to the web browser for repeated use.
I am building a website (probably in Wordpress) which takes data from a number of different sources for display on various pages.
The sources:
A Twitter feed
A Flickr feed
A database on a remote server
A local database
From each source I will mainly retrieve
A short string, e.g. for Twitter, the Tweet, and from the local database the title of a blog page.
An associated image, if one exists
A link identifying the content at its source
My question is:
What is the best way to a) store the data and b) retrieve the data?
My thinking is:
i) Write a script that is run every 2 or so minutes on a cron job
ii) the script retrieves data from all sources and stores it in the local database
iii) application code can then retrieve all data from the one source, the local database
This should make application code easier to manage - we only ever draw data from one source in application code - and that's the main appeal. But is it overkill for a relatively small site?
I would recommend handling the Twitter feed and Flickr feed in JavaScript. Both Flickr and Twitter have REST APIs. By putting it on the client you free up resources on your server, create less complexity, your users won't be waiting around for your server to fetch the data, and you can let Twitter and Flickr cache the data for you.
This assumes you know JavaScript. Once you get past JavaScript quirks, it's not a bad language. Give jQuery a try: there is a jQuery Twitter plugin and a Flickery jQuery plugin, among others; those are just the first results from Google.
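As a rough example of the client-side approach, pulling Flickr's public photo feed with jQuery can be as short as the snippet below (verify the feed URL and parameters against Flickr's current documentation; the tag and the #flickr element are placeholders):

    // Fetch Flickr's public photo feed via JSONP and append a few thumbnails to the page.
    // The 'sunset' tag and the '#flickr' container are illustrative only.
    $.getJSON('https://api.flickr.com/services/feeds/public_photos.gne?format=json&tags=sunset&jsoncallback=?',
      function (data) {
        $.each(data.items.slice(0, 5), function (i, item) {
          $('#flickr').append('<a href="' + item.link + '"><img src="' + item.media.m + '"></a>');
        });
      });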
As for your data on the local server and remote server, that will depend more on the data that is being fetched. I would go with whatever you can develop the fastest that gives acceptable results. If that means making a REST call from server to server, then go for it. If the remote server is slow to respond, I would go with the AJAX REST API method.
And for the local database, you are going to have to write server side code for that, so I would do that inside the Wordpress "framework".
Hope that helps.
I'm working on a side project right now for an email client. I'm using a library to handle the retrieval of the messages from the server. However, I have a question on caching.
I don't want to fetch the entire list of headers every time I load the client. Ideally, what I'd like to do is cache them and then update the list with what is on the server.
What's the best way to go about this? Should I store all the header information (including the server's message ID) in a database, load the headers from that DB, and then sync up with the server as a background task?
Or is there a better way?
Look at the webmail sample of this open source project that uses local caching:
http://mailsystem.codeplex.com/
If I remember correctly, he used a combination of local RFC822 plain-text email storage, with the message ID as the filename, and an index file with high-level data.
The messages themselves may have been zipped to save disk space.
That's just a sample for the library, so don't expect code art there, but that's a start.
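To sketch the cache-then-sync idea in code (JavaScript used only for illustration; the cache object and mail-library calls are stand-ins, not a real API):

    // Keep headers in a local store keyed by message ID; on startup, fetch only the IDs
    // the cache doesn't know about yet. fetchServerMessageIds(), fetchHeader() and the
    // cache object are hypothetical placeholders for whatever your mail library provides.
    async function syncHeaders(cache) {
      var cachedIds = new Set(await cache.getAllMessageIds()); // from the local DB / index file
      var serverIds = await fetchServerMessageIds();           // cheap call: IDs only, no bodies

      for (var id of serverIds) {
        if (!cachedIds.has(id)) {
          var header = await fetchHeader(id);                  // download just the new header
          await cache.saveHeader(id, header);
        }
      }
      // Optionally, drop cached headers whose IDs no longer exist on the server.
    }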