Question about taos shell use source command - tdengine

What is the maximum file size for the taos source 'filename' command? For my project the data file is huge, and I'm worried the transfer could be interrupted partway through.

In general there is no limit in TDengine, but you should keep your system's hardware capabilities in mind.
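If an interruption mid-transfer is the main worry, one mitigation (a sketch only; the file names and chunk size are made up, and it assumes the dump contains one complete SQL statement per line) is to split the dump and replay it chunk by chunk with the taos CLI, so a failure only costs the current chunk:

```shell
# Split a large SQL dump into 100,000-line chunks, then replay each
# chunk with the taos CLI. If one chunk fails, earlier chunks are
# already committed and you can resume from the failed one.
# bigdump.sql and the chunk size are placeholders.
split -l 100000 bigdump.sql chunk_

for f in chunk_*; do
    # -f executes the SQL statements in the file non-interactively
    taos -f "$f" || { echo "failed on $f, resume from here" >&2; exit 1; }
done
```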

Related

How to read txt file from FTP location in MariaDB?

I am new to MariaDB and need to do the following activity.
We are using MariaDB as the database and we need to read a txt file from an FTP location, then load it into a table. This has to be scheduled to read the file at a regular interval.
After searching I found LOAD DATA INFILE, but it has the limitation that it can't be used in Events.
Any suggestions/samples on this would be great help.
Thanks
Nitin
You have to download the file and read it from a local path; MariaDB has only basic file support, and in no case does it support FTP transfers.
LOAD DATA can only read a "file". But maybe the OS can play games...
What Operating System? If the OS can hide the fact that FTP is under the covers, then LOAD DATA will be none the wiser.
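Since MariaDB itself can't reach FTP, a common workaround is to let the operating system fetch the file on a schedule and drive LOAD DATA from a shell script instead of an Event. A sketch (the host, credentials, database, table, and delimiters are all placeholders):

```shell
#!/bin/sh
# fetch_and_load.sh -- pull the file from FTP, then bulk-load it.
# All names and credentials below are illustrative.
curl -s -o /tmp/data.txt "ftp://user:pass@ftp.example.com/path/data.txt"

# --local-infile=1 lets the client send the local file to the server.
mysql --local-infile=1 -u dbuser -pdbpass mydb <<'SQL'
LOAD DATA LOCAL INFILE '/tmp/data.txt'
INTO TABLE mytable
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';
SQL
```

Scheduling is then cron's job rather than MariaDB's, e.g. a crontab line like `*/15 * * * * /path/to/fetch_and_load.sh`, which sidesteps the Event limitation entirely.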

Does Windows ftp process commands at the same time or in sequence?

I'm having trouble finding the answer to this question; maybe I'm just not asking it properly. I have to put a relatively large file (~500 MB at least) on an FTP server and then run a process that takes it as a parameter. My question is as follows: if I'm using ftp.exe to do this, does the put command block until the file is finished being copied?
I was planning on using a .bat file to execute the commands needed but I don't know if the file is going to be completely copied before the other process starts reading it.
edit: for clarity's sake, here is a sample of the .bat that I would be executing.
ftp -s:commands.txt ftpserver
and the contents of the commands.txt would be
user
password
put fileName newFileName
quote cmd_to_execute
quit
The Windows ftp.exe (as probably all similar scriptable clients) executes the commands one-by-one.
No parallel processing takes place.
FTP as a protocol doesn't specify placing a lock on files before writing them. However, this doesn't prevent anyone from implementing the feature, as it would be a great value-add.
Some file systems (e.g. NTFS) may provide a locking mechanism to prevent concurrent access. See File locking - Wikipedia.
See this thread as a reference: How do filesystems handle concurrent read/write?

How to load a file in GT.M?

We tried loading a file in GT.M. We started off by invoking mupip and then the load command. It read the file but shows an error. Do we need to define a schema? If yes, how?
It will be easier to assist you if you post the error you see, and also what steps you have taken to troubleshoot it that may be suggested by the GT.M Messages and Recovery Procedures manual (go to http://fis-gtm.com and click on the User Documentation tab).
It might also help if you clarify what you mean by "load a file". You could be talking about running a program (routine) that is stored as a host operating system file such as "myprogram.m", or you could be talking about loading a file full of data stored in, perhaps, comma-separated value format in a host operating system file such as "mydata.csv".
Also (admittedly unlikely) you could be talking about loading a VistA FileMan file that is stored as a host operating system file formatted as a KIDS build, such as "package.kid", where the FileMan data dictionary (a form of schema) is stored in that KIDS format.
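If the second interpretation applies, note that MUPIP LOAD only understands the GO, ZWR, and BINARY formats produced by MUPIP EXTRACT; feeding it an arbitrary text file such as a CSV will produce an error. A sketch of the extract/load round trip (the file name and the ^MYGLOBAL global are made up):

```shell
# Extract a global to a ZWR-format file, then load it back.
# MYGLOBAL and mydata.zwr are illustrative names.
mupip extract -format=ZWR -select=MYGLOBAL mydata.zwr
mupip load -format=ZWR mydata.zwr
```

There is no schema to define: GT.M globals are schemaless. To ingest a CSV you would instead write a small M routine that reads each line, parses the fields, and SETs the desired global nodes.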

How to write in memory data to DVD on Linux using C?

I have data in MP4 format that needs to be copied to a DVD on the Linux platform. Currently I create an MP4 file on the hard disk and then burn that file to DVD using the growisofs command.
It would be more efficient if I didn't have to write the MP4 data to the hard disk before it is burned to DVD. Please let me know if there is a way to write in-memory data to DVD from a C program.
By reimplementing the tasks growisofs performs. DVDs are different from randomly accessible storage. First the data to be burned onto the blank medium must be prepared in a certain format, namely ISO9660, which includes a certain error-correction scheme. The result of this is a complete track. In the ISO9660 scheme it's not possible to record single files, only whole file systems. Once you have the file system, you must implement the whole process of controlling the recording.
This is what growisofs does. You could take the source of growisofs and replace the code it uses to read the files with code that reads from some shared memory. But then you must make sure that your program can deliver the data continuously, without pausing: once started, the recording process should not be interrupted.
Anyway: if you're on Linux, your program could provide the filesystem structure through FUSE.
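A middle ground, if reimplementing the recorder is overkill: keep the staging data in RAM (e.g. under tmpfs) and stream the ISO9660 image straight into growisofs rather than writing it to the hard disk first. A sketch, where the tmpfs directory and device path are placeholders:

```shell
# Build the ISO9660 image on stdout from files held in a tmpfs
# directory and pipe it straight to the burner -- the image itself
# never touches the hard disk.
# /dev/shm/mp4s and /dev/dvd are placeholders.
genisoimage -quiet -R -J -o - /dev/shm/mp4s \
  | growisofs -Z /dev/dvd=/dev/fd/0
```

Here `/dev/fd/0` is the pipe's read end (stdin), which growisofs treats as a premastered image. A C program can do the equivalent with pipe(2), fork(2), and exec(2). The continuous-delivery caveat above still applies: if the pipeline stalls, the burn can fail.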

Read and Write a file in hadoop in pseudo distributed mode

I want to open/create a file and write some data to it in a Hadoop environment. The distributed file system I am using is HDFS.
I want to do it in pseudo-distributed mode. Is there any way I can do this? Please give the code.
I think this post fits to your problem :-)
Writing data to hadoop
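For a quick start in pseudo-distributed mode, the HDFS shell commands are often enough before writing against the Java FileSystem API. A sketch, assuming a running pseudo-distributed cluster with the Hadoop binaries on PATH (the paths and file names are made up):

```shell
# Create a directory in HDFS and copy a local file into it.
hdfs dfs -mkdir -p /user/demo
echo "some data" > localfile.txt
hdfs dfs -put localfile.txt /user/demo/

# Read the file back from HDFS.
hdfs dfs -cat /user/demo/localfile.txt

# HDFS files are append-only: append more data from a second local file.
echo "more data" > more.txt
hdfs dfs -appendToFile more.txt /user/demo/localfile.txt
```

The same operations map onto `FileSystem.create()`, `open()`, and `append()` in the Java API, which the linked post covers.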
