hlssink2 gives this error out of nowhere after a few seconds of streaming:
Error received from element hlssink: Failed to delete fragment file '/tmp/gnome-networks-display/nd-segment000007.ts': Error removing file /tmp/gnome-networks-display/nd-segment000007.ts: No such file or directory.
Debugging information: ../ext/hls/gsthlssink2.c(479): gst_hls_sink2_handle_message (): /GstPipeline:nd-cc-pipeline/GstHlsSink2:hlssink
Where does it store the last stream info? Even after cleaning the directory, it starts numbering the segments from where the last stream left off (e.g. from nd-segment000046.ts).
Some help here would be greatly appreciated.
I have this link where you can check this whole thing: https://colab.research.google.com/github/justinjohn0306/TalkNET-colab/blob/main/TalkNet_Training.ipynb?authuser=2#scrollTo=nfSawDUD5tqv&uniqifier=1
https://www.youtube.com/watch?v=S2eYaCclnU0
So, my problem is that in Step 3 I get this error and don't know what to do:
AssertionError                            Traceback (most recent call last)
in
     11 output_dir = "/content/drive/My Drive/talknet/name_of_character" #@param {type:"string"}
     12 assert os.path.exists(dataset), "Cannot find dataset"
---> 13 assert os.path.exists(train_filelist), "Cannot find training filelist"
     14 assert os.path.exists(val_filelist), "Cannot find validation filelist"
     15 if not os.path.exists(output_dir):

AssertionError: Cannot find training filelist
Can someone help me please? I uploaded the chopped vocals and the .txt file with the lyrics, and I don't know why it can't see them. :( The text file looks like this:
2.wav|Take my hand.
3.wav|Why are we.
And so on... I put everything in a .zip file and uploaded it to the Drive.
I chopped the vocals in Audacity and exported them with labels.
And then started to do everything as shown in the video.
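As a quick sanity check before running Step 3, each path the cell asserts on can be printed and verified. This is only a diagnostic sketch: the three paths below are placeholders and must match exactly what was typed into the Colab form fields.

import os

# Placeholders -- replace with the exact values entered in the Colab form.
dataset        = "/content/drive/My Drive/talknet/dataset.zip"
train_filelist = "/content/drive/My Drive/talknet/train_filelist.txt"
val_filelist   = "/content/drive/My Drive/talknet/val_filelist.txt"

for name, path in [("dataset", dataset),
                   ("train_filelist", train_filelist),
                   ("val_filelist", val_filelist)]:
    # os.path.exists() is the same check the notebook's assert statements use
    print(name, "->", path, "FOUND" if os.path.exists(path) else "MISSING")

If train_filelist prints MISSING, the usual causes are a typo in the path, Google Drive not being mounted, or the file still sitting inside the uploaded .zip rather than at the location the notebook expects.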
When using the PUT command in a threaded process, I often receive the following traceback:
File "/data/pydig/venv/lib64/python3.6/site-packages/snowflake/connector/cursor.py", line 657, in execute
sf_file_transfer_agent.execute()
File "/data/pydig/venv/lib64/python3.6/site-packages/snowflake/connector/file_transfer_agent.py", line 347, in execute
self._parse_command()
File "/data/pydig/venv/lib64/python3.6/site-packages/snowflake/connector/file_transfer_agent.py", line 1038, in _parse_command
self._command_type = self._ret["data"]["command"]
KeyError: 'command'
It seems to be fairly benign, yet it occurs randomly. The command itself appears to run successfully when I look at the stage. To combat this, I simply catch KeyErrors when PUTs occur and retry several times. This allows processes to continue as expected, but it leads to issues with the subsequent COPY INTO statements. Mainly, because the initial PUT succeeds, I receive a LOAD_SKIPPED status from the COPY INTO. Effectively, the file is put and copied, but we lose information such as rows_parsed, rows_loaded, and errors_seen.
Please advise on workarounds for the initial traceback.
NOTE: An example output after running PUT/COPY INTO processes: SAMPLE OUTPUT
NOTE: I have found that I can use the FORCE parameter with COPY INTO to bypass the LOAD_SKIPPED status; however, the initial error still persists, and this can cause duplication.
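For reference, the retry approach described above can be sketched like this (a simplified sketch, not production code: the connection parameters, stage name, table name, and file path are placeholders, and put_with_retry is a hypothetical helper name):

import time
import snowflake.connector

def put_with_retry(cursor, put_sql, max_attempts=3, delay_s=2):
    # Catch the sporadic KeyError: 'command' and retry the PUT a few times.
    for attempt in range(1, max_attempts + 1):
        try:
            cursor.execute(put_sql)
            return cursor.fetchall()   # PUT status rows
        except KeyError:
            if attempt == max_attempts:
                raise
            time.sleep(delay_s)

conn = snowflake.connector.connect(user="...", password="...", account="...")
cur = conn.cursor()
put_with_retry(cur, "PUT file:///data/out/my_file.csv @my_stage AUTO_COMPRESS=TRUE")
# If the first attempt actually reached the stage, the subsequent COPY INTO
# reports LOAD_SKIPPED; FORCE=TRUE bypasses that but risks loading duplicates.
cur.execute("COPY INTO my_table FROM @my_stage/my_file.csv.gz FILE_FORMAT=(TYPE=CSV)")

This is exactly the trade-off noted above: the retry keeps the process moving, but the COPY INTO result no longer carries rows_parsed/rows_loaded details for the skipped file.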
Good morning all,
My goal is the following:
I want to write to a text file every 5 ms (using a timer).
Below you will find the function code.
Code link
In normal operation, this function works correctly as long as I enter the "REGUL_STATE_EG" state and exit it cleanly. While in the "REGUL_STATE_EG" state, my file fills up correctly, and when I leave the state the file contains everything that was written.
Here is the problem:
As soon as I am in the "REGUL_STATE_EG" state and a power cut occurs while I am still in that state, the data that was added to the file before the power cut is completely lost: I get back an empty file and I don't know why.
In theory, the data written before the power cut should have been saved, because the file is opened and closed on every write; yet in my case I recover an empty file. There is nothing in it.
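For reference, here is a minimal sketch of one write cycle that flushes the buffers and syncs to disk on every write (written in Python just for clarity; the real function is in the code link above, and the file name here is a placeholder):

import os

def append_sample(line):
    # One write cycle: open, append, flush the library buffers,
    # then ask the OS to commit the data to disk before closing.
    with open("regul_eg.txt", "a") as f:   # placeholder file name
        f.write(line + "\n")
        f.flush()
        os.fsync(f.fileno())

append_sample("t=0.005;value=42")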
Thank you in advance for your help.
I get this error:
forrtl: severe (9): permission to access file denied, unit 900, file C:\Abaqus_JOBS\mEFT.txt
When I try to open and simultaneously create the file C:\Abaqus_JOBS\mEFT.txt:
OPEN(900, FILE = "C:/Abaqus_JOBS/mEFT.txt", action = "READWRITE", status = "UNKNOWN")
The error occurs and the file is not created. By the time the error occurs, the file has already been created and deleted at least 100 times.
EDIT:
It seems the error is related to the fact that Windows doesn't delete or close files immediately, so in a parallel computation one process can try to access the file while it still exists... any ideas how to solve this issue?
I'm using Apache Camel 2.11.1
I have the following route:
from("file:///somewhere/").
threads(20).
to("direct:process")
Sometimes I get this exception: org.apache.camel.InvalidPayloadException with the message:
No body available of type: java.io.InputStream but has value: GenericFile[/somewhere/file.txt] of type:
org.apache.camel.component.file.GenericFile on: file.txt. Caused by: Error during type conversion from type:
org.apache.camel.component.file.GenericFile to the required type: byte[] with value GenericFile[/somewhere/file.txt]
due java.io.FileNotFoundException: /somewhere/file.txt (No such file or directory).
Since I'm seeing a lot of .camelLock files in the directory, I assume this happens because several threads attempt to process the same file. How can I avoid that?
UPDATE 1
I tried using scheduledExecutorService and removing threads(20). It seems I'm losing fewer files, but I'm still losing some. How can I avoid this? Any help will be greatly appreciated.
I had a similar issue; mine was two file processors retrieving from the same directory. The result: I was losing all my files.
Here is the scenario:
Thread#1 retrieves file1 and moves it to the process folder.
Thread#2 retrieves the same file, file1, at the same time; file1 gets deleted.
Thread#2 cannot find file1 in the source directory, so the rename fails.
Thread#1 fails because file1 was deleted by Thread#2.
Here is the reason:
If you check the GenericFileProcessStrategySupport.renameFile method, you'll see that Camel first deletes the target file and then renames the source file to the target. That's why the condition above occurs.
I don't know of a generic solution; you should either keep a one-to-one relation between source directory and consumer, or implement a work distributor mechanism.
Since your threads live in the same JVM, I suggest you implement a concurrent load distributor. That would give the requester one file name at a time, in a concurrent way.
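To illustrate the idea only (a language-agnostic sketch written in Python for brevity; in a Camel/JVM setup the equivalent would be built with Java concurrency utilities, and the directory path below is a placeholder): a single thread-safe queue hands each worker exactly one file name at a time, so two workers can never claim the same file.

import os
import queue
import threading

source_dir = "/somewhere"              # placeholder directory
work = queue.Queue()
for name in os.listdir(source_dir):    # enumerate files once, up front
    work.put(os.path.join(source_dir, name))

def process(path):
    print("processing", path)          # placeholder for the real processing

def worker():
    while True:
        try:
            path = work.get_nowait()   # atomically claims one file name
        except queue.Empty:
            return
        process(path)
        work.task_done()

threads = [threading.Thread(target=worker) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()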