I am using Apache Camel to convert an image URL to binary data.
Because of security restrictions, I can't download the image file directly, so I have to download it through Camel. The client is Spring Boot.
The server sends this binary data; the client receives it and saves the image file to a folder on the client.
But how can I handle the binary data? I want to download an image file between the Spring Boot client and Apache Camel.
Here is my process:
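A minimal sketch of one possible setup, assuming a Jetty endpoint on the Camel side and a RestTemplate call on the Spring Boot side (the port, path, and "url" header are placeholders, not the asker's actual route):

    import org.apache.camel.builder.RouteBuilder;

    public class ImageBridgeRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Expose an HTTP endpoint for the Spring Boot client. Camel fetches
            // the image from the URL passed as the "url" query parameter
            // (Jetty maps query parameters to headers) and returns the raw bytes.
            from("jetty:http://0.0.0.0:8081/image")
                .removeHeaders("CamelHttp*")  // don't leak the inbound path/query into the outgoing call
                .toD("${header.url}")         // dynamic HTTP GET of the image URL
                .convertBodyTo(byte[].class);
        }
    }

On the Spring Boot side, the binary body can be read as byte[] and written straight to a file:

    byte[] bytes = new org.springframework.web.client.RestTemplate().getForObject(
            "http://camel-host:8081/image?url={u}", byte[].class,
            "https://example.com/picture.jpg");
    java.nio.file.Files.write(java.nio.file.Paths.get("downloads/picture.jpg"), bytes);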
Related
We are planning our final school project, and I need to find a way to send images from the server to the client (a Flutter app). Due to a lack of experience in a professional environment, I'm struggling to do so.
In my smaller projects I've always saved the image name or image path in the database, fetched the data via an API, and then requested the image, which was located on a web server, over HTTP or HTTPS. That is pretty easy in Flutter with Image.network.
However, that doesn't sound like the best option.
We are planning on using:
Ubuntu or Microsoft Server (still to decide)
MariaDB alone or with MongoDB, or even MS SQL Server (still to decide)
ASP.NET Core for the API
A Flutter app and a web interface for the client side
Any suggestions are appreciated!
You were doing it correctly in your smaller projects; that is the standard way. When the frontend (mobile app or web app) uploads an image using an API, the backend (in your case ASP.NET Core) simply stores it on the server (in your case Ubuntu or Microsoft Server). However, I would store all media files, such as audio, video, images, and documents, in an AWS S3 bucket, because it is difficult to increase server disk space when it runs low, whereas S3 can store any amount of data.
After saving the media files to S3 or the server, store the file URL in the database. Send this URL to the client via the API when the client requests it; on the client side you just need to use that URL to display or download the file.
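The question's backend is ASP.NET Core, but the flow is language-agnostic. As a rough illustration, here it is with the AWS SDK for Java v2 (the bucket name and key are placeholders):

    import software.amazon.awssdk.core.sync.RequestBody;
    import software.amazon.awssdk.services.s3.S3Client;
    import software.amazon.awssdk.services.s3.model.GetUrlRequest;
    import software.amazon.awssdk.services.s3.model.PutObjectRequest;

    public class MediaStore {
        private final S3Client s3 = S3Client.create();

        // Uploads the bytes and returns the object URL to persist in the database.
        public String save(String key, byte[] bytes) {
            s3.putObject(PutObjectRequest.builder()
                    .bucket("my-media-bucket")  // placeholder bucket name
                    .key(key)
                    .build(),
                RequestBody.fromBytes(bytes));
            return s3.utilities().getUrl(GetUrlRequest.builder()
                    .bucket("my-media-bucket")
                    .key(key)
                    .build()).toString();
        }
    }

The returned URL is what goes into the database row for that media item; the API later hands it to the client unchanged.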
I am new to Apache Kafka and Apache Spark. I want to integrate Kafka with my AngularJS code. Basically, I want to make sure that when a user clicks on any link or searches for anything on my website, those searches and clicks are triggered as events and sent to the Kafka data pipeline for analytics.
My question is: how can I integrate my frontend code, which is in AngularJS, with Apache Kafka?
Can I send the search and click-stream data directly to Apache Spark through the Kafka pipeline, or do I need to send the data to Kafka and have Apache Spark poll the Kafka server and receive the data in batches?
I don't think there is a Kafka client for front-end JavaScript (at least I cannot find one at a glance). I also cannot imagine a stable setup in which millions of producers (each client's browser) write to the same Kafka topic.
What you need to do in Angular is call a server-side function that logs your events to Kafka.
The server-side code can be written in a number of languages, including JavaScript for Node.js.
Please take a look at the available clients in the Kafka Documentation.
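For illustration, a minimal server-side producer using the official Java client; the broker address, topic name, and event payload below are placeholders. The Angular app would POST the event to an HTTP endpoint, which forwards it like this:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ClickEventLogger {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // One record per user event; keying by user ID keeps each
                // user's events in order within a partition.
                producer.send(new ProducerRecord<>("click-events", "user-123",
                        "{\"type\":\"click\",\"target\":\"/some/link\"}"));
            }
        }
    }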
Update 2019: there are several projects implementing a REST-over-HTTP(S) proxy for the producer and consumer interfaces, for example the Kafka REST project (source). I have never tried these myself, though.
I have uploaded my files to a folder named uploads using ng-file-upload with Node.js and AngularJS. Now I want to download whichever file is clicked.
I have two queries:
How do I show those files on the client side?
How do I get the response, i.e. the URL of the files' stored location, so I can download them?
Or is there any other way to show and download the files using AngularJS and Node.js?
Any hint will be a great help.
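The stack in the question is Node.js, but the shape of the solution is the same on any backend: one endpoint that lists the files in uploads, one that streams a file back, and the client renders the list as links. A hedged sketch in Java/Spring, purely to illustrate the pattern (paths and names are assumptions):

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;
    import org.springframework.core.io.FileSystemResource;
    import org.springframework.core.io.Resource;
    import org.springframework.http.HttpHeaders;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.*;

    @RestController
    public class UploadsController {
        private final Path uploads = Paths.get("uploads"); // the upload folder

        // Returns the file names so the client can render download links.
        @GetMapping("/files")
        public List<String> list() throws IOException {
            try (Stream<Path> s = Files.list(uploads)) {
                return s.map(p -> p.getFileName().toString())
                        .collect(Collectors.toList());
            }
        }

        // Streams one file back; the client hits /files/{name} to download.
        // A real implementation should also validate "name" against traversal.
        @GetMapping("/files/{name}")
        public ResponseEntity<Resource> download(@PathVariable String name) {
            Path file = uploads.resolve(name).normalize();
            return ResponseEntity.ok()
                    .header(HttpHeaders.CONTENT_DISPOSITION,
                            "attachment; filename=\"" + name + "\"")
                    .body(new FileSystemResource(file));
        }
    }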
I am developing an embedded tool in my ADF Fusion web application that will read and display the log messages of a WebLogic server. I discovered a file named DefaultDomain.log in the AppData\Roaming\JDeveloper\system11.1.1.7.40.67.80\DefaultDomain\servers\DefaultServer\logs folder, which contains all the log messages.
Now I want to write some logic to read the contents of this file dynamically and display them through the tool (ADF Faces). Is it possible to read these log messages from my Java application?
If yes, how do I get the log file name from the WebLogic server? I think the name of the log file is configurable on the server. This application is to be installed in various environments, such as a clustered environment, and also with activation/passivation enabled.
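Reading the file itself is plain Java I/O. A minimal sketch, assuming the path from the question; in a managed environment the configured log file name would normally come from the server's LogMBean over JMX (its FileName attribute) rather than being hard-coded, though that lookup is not shown here:

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;

    public class LogReader {
        public static void main(String[] args) throws IOException {
            // Path taken from the question; resolve it from configuration
            // or the LogMBean in a real deployment.
            Path log = Paths.get(System.getProperty("user.home"),
                    "AppData", "Roaming", "JDeveloper",
                    "system11.1.1.7.40.67.80", "DefaultDomain",
                    "servers", "DefaultServer", "logs", "DefaultDomain.log");

            List<String> lines = Files.readAllLines(log, StandardCharsets.UTF_8);
            // Show only the tail so the ADF page stays responsive.
            int from = Math.max(0, lines.size() - 200);
            lines.subList(from, lines.size()).forEach(System.out::println);
        }
    }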
I have Amazon S3 as my file storage server and EC2 instances as my application logic server. My app needs to upload some files that require processing. I can think of two ways to do this:
Upload the file directly from the mobile device, get the file name and location (URL), and send the URL to my backend. The backend fetches the file by URL and does its job.
Send the file to the backend using a multipart form; the backend accepts the file, does its job, and finally saves the file to Amazon S3.
Which is the standard way? What are the reasons?
Sending the object directly to Amazon S3 will be more scalable, less error-prone, and cheaper (since you need less web server capacity to handle the uploads). Send the corresponding information to a Simple Queue Service (SQS) queue that the back-end service can monitor and process. That way, if your back-end is ever offline, the jobs simply queue up and get processed when the server is running again. A good use of loose coupling.
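A sketch of that flow with the AWS SDK for Java v2: the backend hands the device a presigned PUT URL for the direct upload, then enqueues the object key on SQS (the bucket, key, and queue URL are placeholders):

    import java.time.Duration;
    import software.amazon.awssdk.services.s3.model.PutObjectRequest;
    import software.amazon.awssdk.services.s3.presigner.S3Presigner;
    import software.amazon.awssdk.services.s3.presigner.model.PutObjectPresignRequest;
    import software.amazon.awssdk.services.sqs.SqsClient;
    import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

    public class DirectUploadFlow {
        public static void main(String[] args) {
            // 1. Give the mobile client a short-lived URL it can PUT the file to.
            try (S3Presigner presigner = S3Presigner.create()) {
                String url = presigner.presignPutObject(PutObjectPresignRequest.builder()
                        .signatureDuration(Duration.ofMinutes(10))
                        .putObjectRequest(PutObjectRequest.builder()
                                .bucket("my-upload-bucket")   // placeholder
                                .key("uploads/photo-123.jpg") // placeholder
                                .build())
                        .build()).url().toString();
                System.out.println("Client uploads to: " + url);
            }

            // 2. Tell the back-end worker (via SQS) that the object needs processing.
            try (SqsClient sqs = SqsClient.create()) {
                sqs.sendMessage(SendMessageRequest.builder()
                        .queueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/uploads") // placeholder
                        .messageBody("uploads/photo-123.jpg")
                        .build());
            }
        }
    }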
A third option would be to send the file directly from your mobile device to Amazon S3, using metadata fields to identify the originating user, and then configure the S3 bucket to trigger code in AWS Lambda that processes the file. The Lambda function could do all the processing itself, or simply trigger a process on your web server. Again, this reduces the load on the web server and does not require sending a message to trigger the processing.
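For this option, the bucket's event notification would invoke a handler along these lines (using the aws-lambda-java-events types; the processing itself is left as a stub):

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;
    import com.amazonaws.services.lambda.runtime.events.S3Event;

    public class ProcessUploadHandler implements RequestHandler<S3Event, String> {
        @Override
        public String handleRequest(S3Event event, Context context) {
            // S3 invokes this once per batch of object-created events.
            event.getRecords().forEach(record -> {
                String bucket = record.getS3().getBucket().getName();
                String key = record.getS3().getObject().getKey();
                context.getLogger().log("Processing s3://" + bucket + "/" + key);
                // ... fetch the object and run the actual processing here ...
            });
            return "ok";
        }
    }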