My CSV file has 3941495 records. I checked the file, and it still has exactly 3941495 records after compression, but the result on the server is 3941489 records. I got the status "CompletedWithWarnings", and this is the status message:
The job completed successfully, but some rows failed. Download the error log from the data monitor and check the failed rows.
Does anyone know how to fix it? Or how to download the error log?
If you use Salesforce Wave Analytics, you can monitor an external data upload in the UI. According to the documentation (see page 114):
When you upload an external data file, Wave Analytics kicks off a job
that uploads the data into the specified dataset. You can use the data
monitor to monitor and troubleshoot the upload job. The Jobs view
of the data monitor shows the status, start time, and duration of each
dataflow job and external data upload job. It shows jobs for the last
7 days and keeps the logs for 30 days.
In Wave Analytics, click the gear button and then click Data Monitor
to open the data monitor. The Jobs view appears by default. The Jobs
view displays dataflow and upload jobs. The Jobs view displays each
upload job name as . You can hover over a job to view the entire name.
To see the latest status of a job, click the Refresh Jobs button.
To view the run-time details for a job, expand the job node. The
run-time details display under the job. In the run-time details
section, scroll to the right to view information about the rows that
were processed.
To troubleshoot a job that has failed rows, view the error message.
Also, click the download button in the run-time details section to
download the error log.
Note: Only the user who uploaded the external data file can see the
download button. The error log contains a list of failed rows.
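If you prefer to check the job programmatically rather than through the data monitor, the External Data API exposes each upload job as an InsightsExternalData record. Here is a minimal sketch using the simple_salesforce library, assuming you have API access to the org; the credentials are placeholders:

```python
from simple_salesforce import Salesforce

# Placeholder credentials -- substitute your own org login details.
sf = Salesforce(username="user@example.com",
                password="password",
                security_token="token")

# Each external data upload job is an InsightsExternalData record whose
# Status field reports values such as Queued, InProgress, Completed,
# CompletedWithWarnings, or Failed.
jobs = sf.query(
    "SELECT Id, EdgemartAlias, Status, Action, CreatedDate "
    "FROM InsightsExternalData "
    "ORDER BY CreatedDate DESC LIMIT 5"
)
for job in jobs["records"]:
    print(job["Id"], job["EdgemartAlias"], job["Status"])
```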
I am getting the following error while trying to upload a dataset to Hub (the dataset format for AI): S3SetError: Connection was closed before we received a valid response from endpoint URL: "<...>".
So I tried to delete the dataset, and it throws the error below.
CorruptedMetaError: 'boxes/tensor_meta.json' and 'boxes/chunks_index/unsharded' have a record of different numbers of samples. Got 0 and 6103 respectively.
Using Hub version: v2.3.1
It seems that the runtime got interrupted while you were uploading the dataset, which corrupted the data you were trying to upload. Passing force=True while deleting should allow you to delete it.
For more information, feel free to check out the Hub API basics docs for details on how to delete datasets in Hub.
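As a minimal sketch, assuming the dataset lives at a hub:// path (the path below is a placeholder):

```python
import hub

# force=True deletes the dataset even though its corrupted metadata
# would otherwise fail the integrity check.
hub.delete("hub://<username>/<dataset>", force=True)
```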
If you stop uploading a Hub dataset midway through, the dataset will be only partially uploaded to Hub, so you will need to restart the upload. If you would like to re-create the dataset, you can use the overwrite=True flag in hub.empty(overwrite=True). If you are making updates to an existing dataset, you should use version control to checkpoint the states that are in good shape.
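For example (a sketch with a placeholder path; the commit call assumes the Hub version-control API mentioned above):

```python
import hub

# overwrite=True discards the partially uploaded copy and starts fresh.
ds = hub.empty("hub://<username>/<dataset>", overwrite=True)

# When updating an existing dataset, checkpoint known-good states so a
# failed upload can be rolled back instead of corrupting the dataset.
ds.create_tensor("boxes")  # illustrative tensor, named after the error above
ds.commit("initial known-good state")
```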
I am new to Azure. I am testing the functionality called "Send, receive, and batch process messages in Azure Logic Apps". This is the link to the documentation:
Batches in Logic Apps
I was able to do everything in that tutorial, and it works. I created a batch called "Test" (this is the batch name). My question is: is there a monitor in the Azure portal where I can see which messages were created in that batch by the "batch sender" and therefore see the current status of these messages?
In other words, I would like to see which messages were already processed by the "batch receiver" and which ones still remain to be processed. I would like to know whether I can monitor the batch that I created.
I would like to see which message was already processed by the "batch receiver"
As of now there is no built-in way to monitor the batch process on its own, but here is one workaround: you can add a Compose connector and include all the parameters you need.
Here is my logic app workflow
So each time this logic app is triggered by the batch process, we can check all the properties that we have added for that particular message.
Here is the sample output:
You can check the messages processed by the "batch receiver" from the run history of the logic app.
and therefore see the current status of these messages?
You can monitor this from Azure Monitor logs, which give the number of runs that are running, succeeded, and failed.
and which ones still remain to be processed.
This depends on where you are sending the messages from.
You can follow this document: Monitor logic apps by using Azure Monitor logs.
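If you want the same run counts programmatically, the Logic Apps REST API lists a workflow's run history with a per-run status. A rough sketch (the subscription, resource group, workflow name, and token are placeholders):

```python
import requests

# Placeholder identifiers -- fill in your own subscription, resource group,
# Logic App name, and a valid Azure AD bearer token.
SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
WORKFLOW = "<logic-app-name>"
TOKEN = "<bearer-token>"

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Logic"
    f"/workflows/{WORKFLOW}/runs?api-version=2016-06-01"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

# Each run reports a status such as Running, Succeeded, or Failed.
for run in resp.json()["value"]:
    props = run["properties"]
    print(run["name"], props["status"], props.get("startTime"))
```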
I have an Azure logic app that monitors my emails and, when a target is found, drops the attachment into Blob storage. The plan is a Consumption plan.
The issue is that sometimes it takes up to 50 minutes for the email to be grabbed and dropped. I know there is a startup time when things go idle, but from what I read that should be seconds or minutes, not close to an hour. Does anyone know how I can troubleshoot this?
sometimes it takes up to 50 minutes to grab and drop the email
Based on this doc, the reason for the delay is:
When the trigger encounters a new file, it will try to ensure that the new file is completely written. For instance, it is possible that the file is being written or modified, and updates are being made at the time the trigger polls the file server. To avoid returning a file with partial content, the trigger will take note of the timestamp of files that were modified recently, but will not immediately return those files. Those files will be returned only when the trigger polls again. Sometimes this may lead to a delay of up to twice the trigger polling interval. This also means that the trigger does not guarantee to return all files in a single run when the "Split On" option is disabled.
For more information, you can refer to these:
Automate tasks to process emails by using Azure Logic Apps | MS DOC
How to Send an Email with one or more attachments after getting the content from Blob storage? | SO Thread
Logic app created with add email attachments in Blob storage
One of the requirements I have is to generate flat files in a specific format. The user selects the year from the UI and clicks the generate button.
The flat file process usually takes 3 to 4 hours to generate all the files. While the process is running and the flat files are being created, the UI shows a modal saying that the job is being processed.
The problem is that after the files are successfully generated, the UI redirects to the login screen. Instead, I want the UI to refresh and show a message that the process has completed successfully.
I am looking for help on this. Also, would increasing the conversation timeout or the session timeout in web.xml help fix this issue?
Yes, you could increase both the session timeout and the conversation timeout (if you are doing work in conversation scope) so that they exceed the duration of the job.
A better solution may be to store information about the jobs in a higher scope (e.g. application scope or the database); then, if the user accidentally logs out, the job will continue running and complete.
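To illustrate the higher-scope idea in a language-agnostic way, here is a small Python sketch (the names are hypothetical; in a Java web app the registry would live in application scope or a database table):

```python
import threading
import time
import uuid

# Application-wide job registry; in production this would be a database
# table so status survives restarts and is visible to every session.
jobs = {}
jobs_lock = threading.Lock()

def generate_flat_files(year):
    time.sleep(5)  # stand-in for the real 3-4 hour generation work

def start_flat_file_job(year):
    """Start the long-running job in the background and return its ID."""
    job_id = str(uuid.uuid4())
    with jobs_lock:
        jobs[job_id] = "RUNNING"

    def run():
        generate_flat_files(year)
        with jobs_lock:
            jobs[job_id] = "COMPLETED"

    threading.Thread(target=run, daemon=True).start()
    return job_id

def job_status(job_id):
    """Polled by the UI; works even if the user's session has expired."""
    with jobs_lock:
        return jobs.get(job_id, "UNKNOWN")
```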
I created a web service and a mobile application that communicate between each other. When everything is working, it works great. When the server doesn't respond, it starts to break down.
The mobile device sends a message to the server with a bunch of records. Getting the records on the server never seems to be a problem. It gets the records and then sends a response back to the mobile device that the update was received. The PROBLEM is that the mobile device doesn't always get the response, so it doesn't know it shouldn't send those records again for updating.
The next time, it sends the records again, and now I have duplicate records. How can I solve this?
Idea 1) Create a transaction number, unique per mobile device, that the server can compare against to see whether the record was already uploaded. If it was, just don't write that record, and still attempt to send back the response that it was written (see the sketch after these ideas).
Idea 2) Send the records to the server, but before writing them, respond to the mobile device that they were received. This way the mobile device can tag them and then send another message to the server telling it to write them. At that point the mobile device almost doesn't care whether it gets a response. The only catch is that you don't know whether the server ever got the second message.
I am looking for ideas on how to handle this, either confirming one of these approaches or suggesting a completely different one.
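Idea 1 can be made concrete with a server-side uniqueness constraint on the client-generated transaction ID, so a resent batch is acknowledged without being rewritten. A minimal sketch (the table and column names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect("server.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS records (
           txn_id  TEXT PRIMARY KEY,  -- client-generated, unique per record
           payload TEXT NOT NULL
       )"""
)

def write_record(txn_id, payload):
    """Insert once; a resent duplicate gets the same success response."""
    try:
        conn.execute("INSERT INTO records (txn_id, payload) VALUES (?, ?)",
                     (txn_id, payload))
        conn.commit()
    except sqlite3.IntegrityError:
        pass  # already written by an earlier upload; acknowledge anyway
    return {"txn_id": txn_id, "status": "written"}
```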
I ended up creating logs that the device attempts to resolve when it gets successful responses back from the server.
I tag items as a batch of lines and send them up to the server. Once they are up there, I create a log about the success or failure of each line item in a batch of items and then save the log to the file system.
When the mobile device does not hear a response back from the server, which is rare, it asks the server about that batch number. If the server doesn't respond with a status for that batch, the device assumes the server never received it and re-marks those items for another upload attempt. If it does hear back, it processes the successes and failures line by line and marks the items on the mobile device accordingly. If the mobile device doesn't ask about the log in the next upload, the server assumes the batch's lifecycle is complete and it no longer needs to maintain that log. The log is then deleted.
The server doesn't delete a log until it gets a subsequent request from that specific device that no longer asks about the log. So if log 1 is on the server and the device doesn't ask about it in the next upload, the server removes the log, assuming the device got the response it wanted or no longer cares about it.
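Here is a rough sketch of that log lifecycle on the server side (the names are illustrative, not the original implementation):

```python
batch_logs = {}  # batch_id -> per-line success/failure results

def process_line(line):
    return {"line": line, "ok": True}  # stand-in for real per-line handling

def handle_upload(asked_about, batch_id, lines):
    """One upload request: answer log queries, expire unasked logs, store batch."""
    answers = {log_id: batch_logs.get(log_id) for log_id in asked_about}
    # Any existing log the device did not ask about is treated as acknowledged.
    for log_id in list(batch_logs):
        if log_id not in asked_about:
            del batch_logs[log_id]
    batch_logs[batch_id] = [process_line(line) for line in lines]
    return answers
```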