I am trying to select a local JSON file and load it in my Blazor client component.
<input type="file" onchange="@LoadFile" accept="application/json,.json" class="btn btn-primary" />
protected async Task LoadFile(UIChangeEventArgs args)
{
    string data = args.Value as string;
}
P.S. I don't understand: do I need to keep track of both the name of the file and its contents when retrieving it?
I guess you're trying to read the contents of a JSON file on the client (Blazor), right? Why not do it on the server?!
Anyhow, args.Value can only give you the name of the file. In order to read the contents of the file, you can use the FileReader API (see here: https://developer.mozilla.org/en-US/docs/Web/API/FileReader). That means you should use JSInterop to communicate with the FileReader API. But before you start, I'd suggest you check whether this API has already been wrapped by the community (something like the localStorage wrappers, etc.). You may also need to deserialize the contents you read into something meaningful, such as a C# object.
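For example, here is a minimal sketch of the C# side, assuming an injected IJSRuntime and a small JS helper (hypothetically named readFileAsText) registered on window that wraps FileReader.readAsText:
using System.Threading.Tasks;
using Microsoft.AspNetCore.Components;
using Microsoft.JSInterop;

[Inject] IJSRuntime JS { get; set; }

// Reads the text of the file currently selected in the <input type="file">
// element with the given id. The "readFileAsText" JS function is an assumption:
// it must locate the input, read its first file with FileReader.readAsText,
// and resolve with the resulting string.
async Task<string> ReadSelectedFileAsync(string inputElementId)
{
    return await JS.InvokeAsync<string>("readFileAsText", inputElementId);
}
Once the string is back in C#, you can deserialize it into your model type.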
Hope this helps...
There is a tool that can help, but it currently doesn't support the 3.0 preview. https://github.com/jburman/W8lessLabs.Blazor.LocalFiles
(no affiliation with the developer)
The input control will give you the name of the file (modern browsers deliberately don't expose the real full path). To get the contents you still have to read the file in the browser or upload it to the server.
A late response, but with 3.1 there is an additional AspNetCore.Components package you can download via NuGet to get access to HttpClient extensions. These make it simple:
// fetch mock data for now
var results = await _http.GetJsonAsync<WellDetail[]>("sample-data/well.json");
You could substitute the path from your input control for the "sample-data/well.json" string (note that HttpClient can only fetch files the app serves over HTTP, e.g. under wwwroot, not arbitrary files on the user's disk).
Something like:
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Components;

// _http is an injected HttpClient, e.g.:
// [Inject] private HttpClient _http { get; set; }

private async Task<List<MyData>> LoadFile(string filePath)
{
    // fetch the JSON file and deserialize it into MyData objects
    var results = await _http.GetJsonAsync<MyData[]>(filePath);
    return results.ToList();
}
Related
I have an ASP.NET Core MVC back-end API. One controller returns a file from the server. Is there a way to make a request to the API route via the [href] attribute of an <a> tag? It looks like it tries to match a React route instead of making a request to the server.
I also made an AJAX call to that controller and got the file back as a string (screenshot attached). Why is it a string; shouldn't it be a byte array? How do I rebuild the file from that string? (It's a .pdf file.) I get an empty PDF if I use JavaScript's new File([], 'name', {options}).
The ASP.NET Core controller returns the PDF this way:
return PhysicalFile(Path.GetFullPath(relativePath), "application/pdf", reportName);
In React I receive it as a string this way:
let stringPDFBinary = await ReportService.getReport(id, reportFileName)
I just need to download the file from the API, any way that works.
So, the answer is here: PDF is blank when downloading using javascript
I had the same problem; I'm leaving this here as one more topic to make it easier for others to find. The AJAX response is an encoded string. Set responseType: 'arraybuffer' in the request config and the received PDF will no longer be blank. Solved.
I just copied and pasted this from my source code. The problem seems to be the same one I had:
ASP.NET controller:
[HttpGet]
[Route("File")]
[AllowAnonymous]
public IActionResult GetFile(string key)
{
    var file = (FileCacheValue)_fileCache.Cache[key.Replace(" ", "+")];
    if (file == null)
        return NotFound();

    Response.Headers["content-disposition"] = $"inline;filename={file.Name}.pdf";
    return File(file.Data, "application/pdf");
}
In this case the file comes from a cache system. The data is a byte array.
Front-end React:
const onClick = () => {
    window.open(pdfByteArray, '_blank', 'fullscreen=yes');
};
Exactly what I have: I just put the data in a new window and the PDF opens. The Ajax part is straightforward: request the file with responseType 'arraybuffer' (or 'blob'), wrap the response in a Blob, create an object URL for it with URL.createObjectURL, and pass that URL to window.open.
For web browsers (such as Chrome, IE, or Firefox), when you select a file using a "Choose File" button, where does the file's data get stored?
The file name shows in the browser, but does the data of the file get stored anywhere or is just a link to the file put somewhere, such as in the browser or a temporary file?
To clarify: I want to know where the file's data gets stored BEFORE submitting, JUST after the file is selected from the client's PC and before anything else is done.
After you select a file, I believe the client (browser) just stores a reference to the file's location on the user's computer. It takes a combination of JS and HTML to post the file to the server, via a multipart/form-data POST.
In this case, on the server, you may have to store the file in a temp location of your choosing until you're able to process it (e.g. transform it and/or store it in a datastore), as in the sketch below.
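If you do go that temp-file route, a minimal sketch (classic ASP.NET Web API; the controller, route, and setup here are assumptions) might look like:
using System.IO;
using System.Net;
using System.Net.Http;
using System.Web;
using System.Web.Http;

public class UploadController : ApiController
{
    [HttpPost]
    [Route("upload")]
    public HttpResponseMessage Upload()
    {
        var uploadedFiles = HttpContext.Current.Request.Files;
        if (uploadedFiles.Count == 0)
            return Request.CreateResponse(HttpStatusCode.BadRequest, "no file posted");

        // park the file under a unique temp name until it can be processed
        var tempPath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
        uploadedFiles[0].SaveAs(tempPath);

        return Request.CreateResponse(HttpStatusCode.OK, tempPath);
    }
}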
In newer browsers you can use the FormData object and XHR to post to the server, which is a lot cleaner.
The FormData object is used to construct the key/value pairs that form the data payload for the XHR request.
// Create a new FormData object.
var formData = new FormData();
In this case, once the file bytes are posted to the server, you can do whatever you want with that data. Typically I'll store it as a blob in the DB.
This approach will allow you to keep it all in memory. People make the mistake of trying to store on the server file system. In some multipart form post, you might have to do it this way, however.
Here's some of my web api upload code when using XHR.
I've also called this API route using an iframe (ugh!) in order to support IE8 and older. POS browsers!
/// <summary>
/// Upload the facility logo.
/// </summary>
/// <returns></returns>
[HttpPost]
[Route("logo")]
public HttpResponseMessage Logo()
{
    int newImageId = -1;
    var uploadedFiles = HttpContext.Current.Request.Files;
    if (uploadedFiles.Count > 0)
    {
        var file = uploadedFiles[0];
        if (!file.IsImage())
        {
            // "The uploaded file must be a .jpg, .jpeg, or .png"
            return Request.CreateResponse(HttpStatusCode.UnsupportedMediaType, "unsupported");
        }
        var facilityRepository = new FacilityRepository();
        var logoBytes = StreamCopier.StreamToByteArray(file.InputStream, file.ContentLength);
        newImageId = facilityRepository.InsertLogoImage(logoBytes);
    }
    return Request.CreateResponse(HttpStatusCode.OK, newImageId);
}
I am using the PDF.js library to display PDF files within my site (using pdf_viewer.js to display documents on-screen), but the PDF files I am displaying are confidential. I need to be able to show them within the site while blocking non-authorized members of the public from viewing the same files just by typing in their URLs and seeing them show up right in their browser.
I tried adding the Deny from all line in my .htaccess file, but that of course also blocked the viewer from showing the docs, so that seems to be a no-go. Clearly anyone could simply look at the inspector and see which PDF file is being read by the viewer, so a direct URL is not going to be secure in any way.
I did read about PDF.js being able to read binary data, but I have no knowledge of how I might read a PDF from my own file system and prep it for use by the library, even if that means it is all a bit slower in loading to get the file contents and prep it on the fly.
Anyone have a solution that allows PDFJS to work without revealing the source PDF URL, or to otherwise read the file using local file calls?
Okay, after some testing, the solution is very easy:
Get the PDF data using an Ajax-called function that can figure out what actual file is to be viewed.
In that PHP file...
Read the file into memory, using fopen and fread as normal.
Convert it to base64 using base64_encode.
Pass that string back to the calling JavaScript.
In the original calling function, use the following to convert the string to a Uint8 array and then pass that to the PDF.js library...
The function that turns the base64 string into whatever a Uint8 array is...
function base64ToUint8Array(base64) {
    var raw = atob(base64);
    var uint8Array = new Uint8Array(raw.length);
    for (var i = 0; i < raw.length; i++) {
        uint8Array[i] = raw.charCodeAt(i);
    }
    return uint8Array;
}
The guts that get the file data, call the above function to convert it, and then call PDF.js to display it:
$.ajax({
    type: "GET",
    data: {file: <a file id or whatever distinguishes this PDF>},
    url: 'getFilePDFdata.php', // the PHP file that reads the data and returns it encoded
    success: function(base64Data){
        var pdfData = base64ToUint8Array(base64Data);

        // Loading document.
        PDFJS.getDocument(pdfData).then(function (pdfDocument) {
            // Document loaded, specifying document for the viewer and
            // the (optional) linkService.
            pdfViewer.setDocument(pdfDocument);
            pdfLinkService.setDocument(pdfDocument, null);
        });
    }
});
I want to parse the Swagger data from the JSON I get from {service}/swagger/docs/v1 into a dynamically generated .NET class.
The problem I am facing is that different APIs can have a different number of parameters and operations. How do I dynamically parse the Swagger JSON for different services?
My end result should be a list of all APIs and their operations in a variable that I can easily search.
Did you ever find an answer for this? Today I wanted to do the same thing, so I used the AutoRest open source project from MSFT, https://github.com/Azure/autorest. While it looks like it's designed for generating client code (code to consume the API documented by your swagger document), at some point on the way to producing this code it had to have done exactly what you asked in your question: parse the Swagger file and understand the operations, inputs and outputs the API supports.
In fact we can get at this information: AutoRest publicly exposes it.
So use NuGet to install AutoRest. Then add a reference to AutoRest.Core and AutoRest.Model.Swagger. So far I've just simply gone for:
using Microsoft.Rest.Generator;
using Microsoft.Rest.Generator.Utilities;
using System.IO;
...
var settings = new Settings();
settings.Modeler = "Swagger";
var mfs = new MemoryFileSystem();
mfs.WriteFile("AutoRest.json", File.ReadAllText("AutoRest.json"));
mfs.WriteFile("Swagger.json", File.ReadAllText("Swagger.json"));
settings.FileSystem = mfs;
var b = System.IO.File.Exists("AutoRest.json");
settings.Input = "Swagger.json";
Modeler modeler = Microsoft.Rest.Generator.Extensibility.ExtensionsLoader.GetModeler(settings);
Microsoft.Rest.Generator.ClientModel.ServiceClient serviceClient;
try
{
    serviceClient = modeler.Build();
}
catch (Exception exception)
{
    throw new Exception(String.Format("Something nasty hit the fan: {0}", exception.Message));
}
The swagger document you want to parse is called Swagger.json and is in your bin directory. The AutoRest.json file you can grab from their GitHub (https://github.com/Azure/autorest/tree/master/AutoRest/AutoRest.Core.Tests/Resource). I'm not 100% sure how it's used, but it seems it's needed to inform the tool about what it supports. Both JSON files need to be in your bin.
The serviceClient object is what you want. It will contain information about the methods, model types, and method groups.
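As a rough sketch of turning that into a searchable list of operations (the property names here are from memory and may differ between AutoRest versions, so treat them as assumptions):
// walk the parsed model; Methods is assumed to expose each operation with
// its HTTP verb, route template and parameters
foreach (var method in serviceClient.Methods)
{
    Console.WriteLine("{0} {1} ({2} parameters)",
        method.HttpMethod, method.Url, method.Parameters.Count);
}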
Let me know if this works. You can try it with their resource files. I used their ExtensionsLoaderTests for reference when I was playing around (https://github.com/Azure/autorest/blob/master/AutoRest/AutoRest.Core.Tests/ExtensionsLoaderTests.cs).
(Also, thank you to Denis, an author of AutoRest.)
If this is still a question, you can use the Swagger Parser library:
https://github.com/swagger-api/swagger-parser
as simple as:
// parse a swagger description from the petstore and get the result
SwaggerParseResult result = new OpenAPIParser().readLocation("https://petstore3.swagger.io/api/v3/openapi.json", null, null);
Most of the solutions I've come across for SharePoint doc library uploads use the HTTP "PUT" method, but I'm having trouble finding a way to do this in Silverlight because it restricts the available HTTP methods. I visited http://msdn.microsoft.com/en-us/library/dd920295(VS.95).aspx to see how to allow PUT in my code, but I can't see how that helps you actually issue an HTTP "PUT".
I am using the client web services, so that limits some of the SharePoint functions available.
That leaves me with these questions:
Can I do an http PUT in Silverlight?
If I can't or there is another better way to upload a file, what is it?
Thanks
Figured it out!! Works like a charm:
public void UploadFile(String fileName, byte[] file)
{
    // format the destination URL
    string[] destinationUrls = { "http://qa.sp.dca/sites/silverlight/Answers/" + fileName };

    // fill out the metadata
    // remark: don't set the Name field, because this is the name of the document
    SharepointCopy.FieldInformation titleInformation = new SharepointCopy.FieldInformation
    {
        DisplayName = fileName,
        InternalName = fileName,
        Type = SharepointCopy.FieldType.Text,
        Value = fileName
    };

    // to specify the content type
    SharepointCopy.FieldInformation ctInformation = new SharepointCopy.FieldInformation
    {
        DisplayName = "XML Answer Doc",
        InternalName = "ContentType",
        Type = SharepointCopy.FieldType.Text,
        Value = "xml"
    };

    SharepointCopy.FieldInformation[] metadata = { titleInformation };

    // initialize the web service
    SharepointCopy.CopySoapClient copyws = new SharepointCopy.CopySoapClient();

    // execute the CopyIntoItems method
    copyws.CopyIntoItemsCompleted += copyws_CopyIntoItemsCompleted;
    copyws.CopyIntoItemsAsync("http://null", destinationUrls, metadata, file);
}
Many Thanks to Karine Bosch for the solution here: http://social.msdn.microsoft.com/Forums/en/sharepointdevelopment/thread/f135aaa2-3345-483f-ade4-e4fd597d50d4
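The copyws_CopyIntoItemsCompleted handler wired up above isn't shown; a hypothetical sketch (the event args members come from the generated proxy, so the exact names may differ in your service reference) could look like:
private void copyws_CopyIntoItemsCompleted(object sender,
    SharepointCopy.CopyIntoItemsCompletedEventArgs e)
{
    if (e.Error != null)
    {
        // the async call itself failed (network, auth, ...)
        return;
    }

    // each destination URL should get a CopyResult describing success or failure
    foreach (SharepointCopy.CopyResult result in e.Results)
    {
        // inspect result.ErrorCode / result.ErrorMessage here
    }
}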
What type of SharePoint deployment, and what version of Silverlight? If it is an intranet deployment, you could use UNC paths to access your document library in SharePoint, plus the SaveFileDialog/OpenFileDialog available in Silverlight 3 (see the sketch at the end of this answer).
http://progproblems.blogspot.com/2009/11/saveread-file-from-silverlight-30-in.html
or
http://www.kirupa.com/blend_silverlight/saving_file_locally_pg1.htm
Silverlight has restrictions on what it can do with local files, though I've read that Silverlight 4 has some changes.
http://www.wintellect.com/CS/blogs/jprosise/archive/2009/12/16/silverlight-4-s-new-local-file-system-support.aspx
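For the OpenFileDialog route mentioned above, here is a minimal sketch (Silverlight 3+, using System.Windows.Controls.OpenFileDialog; the dialog must be opened from a user-initiated action such as a button click):
// let the user pick a local file and read its contents inside the sandbox
var dialog = new OpenFileDialog { Filter = "All files (*.*)|*.*" };
if (dialog.ShowDialog() == true)
{
    using (var stream = dialog.File.OpenRead())
    using (var reader = new System.IO.StreamReader(stream))
    {
        string contents = reader.ReadToEnd();
        // hand the contents (or raw bytes) on to your upload/web service call
    }
}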