I have created a properties file in another directory on my computer. I want to read the data from that file and display it. Can anyone please suggest how I can achieve this? Thanks in advance.
From what I understand, you have a properties file on YOUR computer and you want to read that file from a React app. This is not possible, as the front end is not allowed to directly access the user's hard disk; that would be a big security flaw. This is because the front-end part runs on the client side.
Consider a situation where you had written code to read a file from the desktop. Your app would then be able to read the desktop files of ALL USERS who use that app. That's why you always see an upload button when you have to choose a file to read: the file is first sent to the server side and then processed.
Since React runs on the client side, it is better to maintain a server and make an API call to it to fetch the data. Or you can hard-code the data in the React app itself if it isn't sensitive information.
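For illustration, here is a minimal sketch of that approach, assuming a hypothetical /api/properties endpoint on your own server that reads the properties file and returns its contents as JSON:

import React, { useEffect, useState } from 'react';

// Minimal sketch: the component fetches the properties from your own backend
// (the /api/properties endpoint is hypothetical) and renders them as a list.
function Properties() {
  const [props, setProps] = useState(null);

  useEffect(() => {
    fetch('/api/properties')
      .then((res) => res.json())
      .then((data) => setProps(data))
      .catch((err) => console.error('Failed to load properties', err));
  }, []);

  if (!props) return <div>Loading...</div>;

  return (
    <ul>
      {Object.entries(props).map(([key, value]) => (
        <li key={key}>{key} = {String(value)}</li>
      ))}
    </ul>
  );
}

export default Properties;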
On the front-end side: you can't, and you shouldn't be able to, because it would be a huge security risk. Do not try to solve this on the client side. Think about a back-end solution instead, after uploading that particular file to the server.
On the other hand, why are you trying to keep a file that is logically connected with the app outside of the repository?
Since it's not clear what you are trying to achieve:
Situation 1: You developed a React app for users, and it is trying to read a user's file on their computer.
This is not possible, as React is a front-end library that can only access resources available to the browser. You just can't read someone else's files.
Situation 2: The file is part of your project but lives in a different directory.
In that case, just put the file inside your project directory. Since it is a properties file, here is how you can import it into your project.
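A minimal sketch of one way to do that, assuming you keep the values as JSON (webpack understands JSON imports out of the box, whereas a raw .properties file would need an extra loader); the file name and keys are hypothetical:

// src/config/app-config.json lives inside the project, so it can be imported directly.
import config from './config/app-config.json';

console.log(config.apiUrl); // use the values anywhere in your components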
I'm a little new to GraphQL and this question falls under "It cannot possibly be this hard. I have to be missing something."
I have a fairly standard GraphQL/Apollo/React application split into client and server. Everything is working well with the client making API calls and getting data back from the server. The client is even able to upload files to the server. However, I now need the server to stream back files saved on disk. That's it.
This is the "I have to be missing something" part. Everything I've seen in the docs and on Stackoverflow is some variation of pushing the file back from the server and through the GraphQL query as a base64-endocded string and then doing some very hacky stuff on the client, often involving a hidden href tag and a simulated click. To this I say, "What???"
Seriously. There are files on disk that the server knows how to find. The client needs to show a button to the user that they can click on to download the file. That's it. Every other framework in every other language has an easy way to do this. Can someone show me the incredibly simple thing that I'm missing here?
Thanks,
Alex
What you're missing is that GraphQL really shouldn't be used for this purpose.
While GraphQL itself does not mandate a particular format for serializing responses, the de facto format is JSON, and the only way to get a file inside a JSON response is to serialize it as a string.
If you want to serve static content, you should set up Nginx, Apache or another web server that's been built with that in mind. Alternatively, if you're already using some existing web server library like Express, it most likely has tools for serving static content as well.
Just because you have a GraphQL endpoint does not necessarily mean it should be the only way your client communicates with your backend.
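For example, if you are already running Express alongside your GraphQL endpoint, a minimal sketch of a dedicated download route might look like this (the route name and file location are hypothetical):

const path = require('path');
const express = require('express');

const app = express();

// GraphQL can return the file's name or id; the actual bytes are served by a
// plain HTTP route rather than by the GraphQL query itself.
app.get('/files/:name', (req, res) => {
  const filePath = path.join(__dirname, 'files', path.basename(req.params.name));
  res.download(filePath); // sets Content-Disposition so the browser downloads it
});

app.listen(4000);

On the client, the download button can then be a plain link (or a window.location assignment) pointing at that URL; no base64 strings or hidden href tricks are needed.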
I am trying to create a site that e-learning courses (zips of html/css/js/media) can be uploaded to.
I am using Go on Google App Engine, with Google Cloud Storage to store the zips and the extracted courses.
I will explain the development dead ends I have encountered.
My first thought was to use the resumable upload functionality of Cloud Storage to send the zip file, then read it using Go on App Engine, unzip the files and write them back to Cloud Storage.
It took a while to read and understand the documentation, but this worked perfectly for my 2MB test zip. It failed when I tried it with a modest 67MB zip: I had run into a hidden limitation when accessing Cloud Storage from App Engine. No matter which client I used, there was a 10MB/32MB limit.
I tried both the old and new libraries as well as blobstore.
I also looked into creating a custom oauth2 supporting client library using sockets but hit too many dead ends.
Giving up on that approach, I thought that, even though it would mean more uploading, extracting on the client (browser) side and then uploading each file with its own resumable upload would make the most sense. After exploring a few libraries I had in-browser extraction working and ready to upload.
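For reference, a rough sketch of that browser-side idea, assuming JSZip as the extraction library and a hypothetical uploadUrlFor() helper that asks the App Engine handler for an upload URL (real resumable uploads also involve a session-initiation step, omitted here):

import JSZip from 'jszip';

async function extractAndUpload(zipFile, uploadUrlFor) {
  // zipFile is the File object picked from an <input type="file">
  const zip = await JSZip.loadAsync(zipFile);
  const entries = Object.values(zip.files).filter((e) => !e.dir);

  for (const entry of entries) {
    const blob = await entry.async('blob');      // extract one file from the archive
    const url = await uploadUrlFor(entry.name);  // ask the server where to upload it
    await fetch(url, { method: 'PUT', body: blob });
  }
}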
I wrote my handler, which created the datastore entry for the upload, selected a location for the upload and created all the upload URLs.
When testing this I found that it took a while to work through the long lists of files (anything over 100). Since I was using Go, I decided it made sense to make those requests concurrently. I spent a day or two getting that working. After dealing with some CORS issues that, oddly, had not shown up earlier, I had everything working.
Then I started getting errors when stress-testing my approach with a large (500MB) zip/course. The uploads would fail, and I discovered that when trying to send 300+ files to generate upload URLs I was getting the following error:
Post http://localhost:62394: dial tcp [::1]:62394: connectex: No connection could be made because the target machine actively refused it.
Now I have no idea how to diagnose this. I don't know if I am hitting a rate limit, and if I am, I don't know how to avoid it.
It seems like this should be simple to build, but it is anything but.
I have a few options I could pursue:
Try to create the resumable uploads with a batch operation (https://cloud.google.com/storage/docs/json_api/v1/how-tos/batch); however, batch operations to /upload are not supported.
Request each upload URL with its own one-by-one API call (see the sketch after this list).
Request the URLs over a channel (https://cloud.google.com/appengine/docs/go/channel/reference).
Spend the next week or more adding layers of retries and fallback error handling.
Try another solution.
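To illustrate the one-by-one option, here is a hedged browser-side sketch that asks the handler for upload URLs in small chunks instead of all 300+ in a single request; the /upload-urls route and its payload shape are hypothetical:

async function requestUploadUrls(fileNames, chunkSize = 20) {
  const urls = {};
  for (let i = 0; i < fileNames.length; i += chunkSize) {
    const chunk = fileNames.slice(i, i + chunkSize);
    const res = await fetch('/upload-urls', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ files: chunk }),
    });
    Object.assign(urls, await res.json()); // expected shape: { fileName: uploadUrl, ... }
  }
  return urls;
}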
This should be simple. How should this be done?
I'm writing a single-page web app (AngularJS) and a server back end (Node.js). The communication between them is done via REST.
Currently I'm trying to implement the following scenario:
Upload big files from the browser to a public S3 bucket.
Copy the uploaded file to a private S3 bucket.
Transcode the uploaded file to an HTML5-compatible format (AWS Elastic Transcoder).
Store a meta-object about the file in the DB for later access.
I'm racking my brain to come up with a solid design for the communication/data workflow between server and client, but I always get stuck on the following questions:
Should the file meta-object be stored at the end or at the beginning of the process? If at the beginning, I have to store and handle some state information.
Who should start copying uploaded files to the private bucket, the server or the client? If it is the server, how does the client get informed that the job succeeded?
Who starts the transcoding process? If it is the server, how does the client get informed that the job succeeded?
How would you do this?
There is a pretty good tutorial which describes the use case you are planning to implement: http://www.bitcodin.com/blog/2015/02/create-mpeg-dash-hls-content-for-amazon-s3-and-cloudfront/
If your transcoding system has a RESTful API (like bitcodin, which is used in that tutorial, or any other service), you can drive your application client-side and use the API calls to get the state of your transcodings, etc. However, using the API you can do the same server-side as well, whatever fits better for you.
I personally would store the metadata info at the beginning of the process, as this is the point in time where you generate the "asset" in your database/CMS/etc.
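To make that concrete, here is a hedged Node/Express sketch of the flow: create the metadata record up front, update a status field as the copy and transcode steps finish, and let the client poll for it. The routes, status values and in-memory store are all hypothetical stand-ins:

const express = require('express');

const app = express();
app.use(express.json());

const assets = new Map(); // stand-in for the real database/CMS
let nextId = 1;

// 1. The client reports a finished upload to the public bucket.
app.post('/assets', (req, res) => {
  const id = String(nextId++);
  assets.set(id, { s3Key: req.body.s3Key, status: 'uploaded' });
  // Here the server would kick off the copy-to-private-bucket job and, once it
  // succeeds, the transcoding job, updating status along the way:
  // uploaded -> copied -> transcoding -> ready
  res.json({ id });
});

// 2. The client polls for progress instead of waiting to be notified.
app.get('/assets/:id/status', (req, res) => {
  const asset = assets.get(req.params.id);
  if (!asset) return res.status(404).end();
  res.json({ status: asset.status });
});

app.listen(3000);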
I have a situation that is beyond my knowledge. Here is the situation:
A simple web-based system stores Word files. Users create them locally, then upload them to the server. After that, another user can download, edit and upload them again. All of that is okay, but the repeated download/upload steps cause trouble, for example when a user forgets to upload after making changes. The prerequisite is that they want to use only Word, so I can't use any web editors like CKEditor or Google Documents.
So the question: is there a way to let users open/save those DOC files with Word without setting up a VPN?
The server is Windows 2008, and the language is ASP.NET / classic ASP. Users access the system via browsers.
I think you can embed a plugin called aceoffix in your web system, so that customers do not have to download, upload and save back to the server. With aceoffix they can edit online and save back to the server directly. It has exactly the same interface as MS Office. Hope this is helpful.
How about a tiny app (on the clients) that acts as a synchronizer (using FTP)?
I think an embedded Word viewer would be quite complex to pull off, especially if they require the native, proper and exact Word look/menus.
One alternative is to provide a plugin to your users with which they can access/sync documents directly from/to the server. But then you aren't using a web site but a local plugin, which comes with its own headaches, of course.
Creating a Word plugin is a nice way to make it seem like something "in the Office program" when you have actually created it yourself, so that your users don't feel like they are using another program. My idea is that you could create a way for users to load a Word file from the server, make changes to it, and then have it uploaded back to the server automatically.
I'm trying to send a file, as an attachment, via the phone's e-mail client using Titanium Mobile. I've run into a snag where the attachment is sent, but is received as a 0-byte file.
The problem is that the file created in data/data/package/app_appdata has permissions -rw------- (readable and writable only by the owner).
From glancing at the Android SDK, this is by design. The app's "private storage" is readable only by the owner of that folder, the running application.
I presume that the Android e-mail client can see that file, but cannot read it.
The full Android SDK mentions a MODE_WORLD_WRITABLE that allows you to keep using the applicationDataDirectory and give all apps permission to read/write that file. Does an equivalent exist in Titanium Mobile?
The other solution is to create a temp file, which unfortunately uses Titanium's own naming scheme (tiXXXXX.txt), or I could write to the "external storage" since it is public (which may not always be available, however.)
This is the call I'm using to get the current file. It can be read within my app just fine, but when I use the addAttachment call of an emailDialog it simply sends a 0-byte file to me.
var f = Ti.Filesystem.getFile(Ti.Filesystem.applicationDataDirectory, "generated_filename.txt")
Have you tried using tempDirectory instead? I'm assuming, of course, that once the file is emailed you don't need to keep it, since the applicationDataDirectory is fully backed up and is usually used to store data the app retains.
http://developer.appcelerator.com/apidoc/mobile/latest/Titanium.Filesystem.tempDirectory-property.html
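A hedged sketch of that suggestion: copy the file into the temp directory, which other apps such as the mail client can read, and attach the copy (the file name and subject below are just placeholders):

var src = Ti.Filesystem.getFile(Ti.Filesystem.applicationDataDirectory, 'generated_filename.txt');
var tmp = Ti.Filesystem.getFile(Ti.Filesystem.tempDirectory, 'generated_filename.txt');
tmp.write(src.read()); // copy the private file into temp storage

var emailDialog = Ti.UI.createEmailDialog();
emailDialog.subject = 'Generated file';
emailDialog.addAttachment(tmp); // attach the readable copy
emailDialog.open();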