Snowpipe status:
I usually validate by checking that lastForwardedMessageTimestamp > lastReceivedMessageTimestamp, which I read as "receiving and forwarding of data" being complete.
In this pipe status, lastReceivedMessageTimestamp is greater than lastForwardedMessageTimestamp. Does that mean SQS is not forwarding messages to Snowpipe?
You may read the following to get more information about Pipe Status:
https://docs.snowflake.com/en/sql-reference/functions/system_pipe_status.html
lastReceivedMessageTimestamp:
Timestamp of the last message received from the queue. Note that this message might not apply to the specific pipe, e.g., if the path/prefix associated with the message does not match the path/prefix in the pipe definition. In addition, only messages triggered by created data objects are consumed by auto-ingest pipes.
lastForwardedMessageTimestamp:
Timestamp of the last “create object” event message with a matching path/prefix that was forwarded to the pipe.
I think it explains why lastReceivedMessageTimestamp could be greater than lastForwardedMessageTimestamp.
Usually your notification_channel is the same for all the PIPES in your account.
All pipes are linked to the same SQS queue, so they all receive a message when a file notification is triggered. That's why all your pipes should have the same lastReceivedMessageTimestamp at a given time.
A pipe forwards the message to its COPY instruction only if the message matches the location/pattern configured in the pipe definition.
Problem
I would like to program an attachable command-line interface for my daemon.
I developed a daemon running 24/7 on Linux (OpenWrt):
#!/bin/sh /etc/rc.common

START=98
USE_PROCD=1
PROCD_DEBUG=1

start_service() {
    procd_open_instance
    procd_set_param command "/myProgram"
    procd_set_param respawn
    procd_close_instance
}
I would like to add a debug user interface for testing, so we could live-tune some parameters/actions and print logs. Something like the screen package.
Hence I want to create a command-line interface for this daemon.
Research
Stdin/Stdout
Ideally I would like to write directly to the stdin of the daemon and read its stdout.
Daemon
Duplicate stdin to a file.
Duplicate stdout to a file.
Client
A C program launched by the tester.
It would relay its stdin to the daemon's stdin file, and the daemon's stdout file to its stdout.
Critic
That would maybe be the simplest way, and I could read stdout.
I couldn't find any examples, which makes me think I'm overlooking something.
There's a risk I fill the flash by writing endlessly to the stdout file.
Pipes
Creating two named pipes would be possible.
Daemon
The daemon would create a named input pipe and poll it with non-blocking reads.
A second output pipe is necessary to write back the response to the received command.
Client
A C program launched by the tester.
It would relay stdin to the input pipe, and the output pipe to stdout.
Critic
I don't know if I can properly redirect the stdout of the daemon to the output pipe, which means I wouldn't be able to print the stdout logs, only specific CLI-coded responses.
MessageQ
Same issues as pipes.
Sockets
Seems rather complex for a simple application.
Shared Memory
The paradigm does not seem appropriate.
Pty
Maybe something can be done with pseudo-terminals, but I don't understand them even after reading explanations: attach a terminal to a process running as a daemon (to run an ncurses UI).
Screen/Tmux
I don't have screen or tmux in my repository.
Question
What is the proper way to create a CLI for a daemon? Where could I find an example?
I would use a Unix domain stream socket, with the CLI thread in a blocking accept() until a connection is obtained.
These sockets are bidirectional, and you can write a trivial CLI application that copies standard input to the connected socket, and the connected socket to standard output. (That same trivial CLI program could be used to redirect the output over e.g. SSH to one's local computer with much more storage, running the CLI program remotely using something like ssh -l username openwrt-device.name-or-address cli-program | tee local-logfile. OpenWrt devices often don't have suitable storage for log files, so this can be very useful.)
Use vdprintf() to implement your own printf() that writes to the connected CLI.
Because sockets are bidirectional, if you want to use locking (for example, to avoid mixing logging output and CLI responses), use a mutex for writing; the read side does not need to take the mutex at all.
You cannot really use <stdio.h> FILE * stream handles for this, because its internal buffering can yield unexpected results.
Assuming your service daemon uses sockets or files, it can be very useful to reserve the file descriptor used for the bidirectional CLI connection, by initially opening /dev/null read-write (O_RDWR). Then, when the connection is accept()ed, use dup2() to move the accepted connection descriptor to the reserved one. When the connection is to be closed, use shutdown(fd, SHUT_RDWR) first, then open /dev/null, and dup that descriptor over the connection to be closed. This causes the connection to be closed and the descriptor to be reopened to /dev/null, in an atomic manner: the descriptor is never "unused" in between. (If it is ever close()d in a normal manner, another thread opening a file or socket or accepting a new connection may reuse that descriptor, causing all sorts of odd effects.)
Finally, consider using an internal (cyclic) buffer to store the most recent output messages. You do not need to use a human-readable format, you can use e.g. the first character (codes 1 to 254) to encode the severity or log level, keeping NUL (0) as the end-of-string mark, and 255 as the "CLI response" mark, so that your CLI program can use e.g. ANSI colors to color the output if output is a terminal. (For example, "\033[1;31m" changes output to bright red, and "\033[0m" returns the output back to normal/default. The \033 refers to a single character, code 27, ASCII ESC.) This can be very useful to efficiently indicate the priority/severity of each separate output chunk to the human user. The Linux kernel uses a very similar method in its kernel logging facility.
In my process, I am creating a child process and running a binary with the execl() API. The parent process calls waitpid() and waits for the child to exit. This binary opens /etc/resolv.conf and tries to connect to the DNS IP. If the DNS IP is not reachable, the child process blocks for a long time, and because of that the parent process times out. I do not have the source code of the binary, and I do not want to change anything in /etc/resolv.conf, as this file is used by other processes.
Is there any way I can remove or restrict access to /etc/resolv.conf for my child process?
It is not easy to prevent access to /etc/resolv.conf. But you can tell the resolver the number of attempts to perform for DNS name resolution through the environment variable RES_OPTIONS. Even zero attempts is a valid value there, and it causes name resolution to fail instantly.
See for example:
RES_OPTIONS="attempts:0" telnet www.google.de
telnet: could not resolve www.google.de/telnet: Temporary failure in name resolution
This means that in your program, you could do
...
putenv("RES_OPTIONS=attempts:0");
execl(...);
...
This should cause the resolving to fail instantly and your process should proceed.
I'm doing a lab assignment where we make a server program and a client program. It's on the QNX OS; I'm not sure if it runs on Linux. The outline is this:
"Write a pair of C programs msgSender.c and msgLogger.c to demonstrate Neutrino message passing between processes.
Your programs will be called from the shell as:
$ msgLogger logFileName
$ msgSender msgLogger
logFileName is the name of the log that stores the messages
The msgLogger process acts as a logger. It receives messages and writes the messages to a file.
msgLogger receives text-based messages of the format shown in msg.h. It must test the message header and only write message text to the logFile if the message type is MSG_DATA.
If MSG_DATA is received, the reply status is MSG_OK.
If a MSG_END is received, the server replies with a status of MSG_END and then cleans up and exits.
If the received message is not MSG_DATA or MSG_END, the reply status is MSG_INVALID and the message text is not logged. A warning message is logged.
This process advertises its presence by writing its ND PID CHID to a file named msgLogger.pid where the "msgLogger" part of the filename is taken from argv[0].
The logged messages are stamped with the time and the ND PID COID of the sender.
The msgSender is an interactive program that assembles and sends the text-based message.
Reads the name of the logger process from the command line and uses this name to build the name of the .pid file where it reads the ND PID CHID.
It prompts the user for the message header type and then for the text of the message.
It will exit if it receives a MSG_END from the server.
It prints a warning if MSG_INVALID is received from the server.
Your client and server must interoperate with my client and server.
Validate that the server works properly with multiple concurrent clients.
If you flush your server's file write buffer after each logging message, you can run it in the background and use
$ tail -f logFile
to view the messages as they are received.
Be sure to check the validity of the command-line argument.
Use global variables only when necessary.
"
I've got the msgLogger fully working; here is the code:
http://pastebin.com/8AGfGZ5u
And here is the msg.h file:
http://pastebin.com/3xcBZvnH
And here is the code I have so far for msgSender:
http://pastebin.com/Buk88Kry
What the sender (client) needs to do is let the user enter the message type using digits. The msg.h file contains the message type numbers, with MSG_DATA being 1, etc. If they enter an invalid digit, it'll ask them to try again; otherwise it will store that digit and assign it to the amsg.m_hdr field of the MESSAGE struct. amsg.m_data holds the message text.
Then the user enters the message they want, and if they chose the digit 1 (MSG_DATA), the server sends a reply and the client prints "message successfully received", while the contents of the message are saved to the log file.
Unfortunately I'm having a bunch of problems and it's not logging the message. I have to hand in the msgSender tomorrow, and it's also dependent on my next lab. I really hope I can get some help on this.
Try flushing the buffer after the client writes. If you don't close the file while writing to it, there's no guarantee that the stdio buffer was flushed and the data actually written. Call fflush() to push the stdio buffer to the kernel whenever you want; if you also need the data synced to disk, call fsync() on the underlying file descriptor.
I'm working on my homework which is to replicate the unix command shell in C.
I've implemented up to single-command execution with background execution (&).
Now I'm at the stage of implementing pipes, and I face this issue: for pipelines with more than one pipe, the piped child commands complete, but the final output doesn't get displayed on stdout (the last command's stdin is replaced with the read end of the last pipe):
dup2(pipes[lst_cmd], 0);
I tried fflush(STDIN_FILENO) at the parent too.
My program exits on CONTROL-D, and when I press that, the output gets displayed (and the program exits, since my action on CONTROL-D is exit(0)).
I think the output of the pipe is stuck in the stdout buffer but doesn't get displayed. Is there any other means than fflush to get the contents of the buffer to stdout?
Having seen the code (unfair advantage), the primary problem was the process structure combined with not closing pipes thoroughly.
The process structure for a pipeline ps | sort was:
main shell
- coordinator sub-shell
- ps
- sort
The main shell was creating N pipes (N = 1 for ps | sort). The coordinator shell was then created; it would start the N+1 children. It did not, however, wait for them to terminate, nor did it close its copy of the pipes. Nor did the main shell close its copy of the pipes.
The more normal process structure would probably do without the coordinator sub-shell. There are two mechanisms for generating the children. Classically, the main shell would fork one sub-process; it would do the coordination for the first N processes in the pipeline (including creating the pipes in the first place), and then would exec the last process in the pipeline. The main shell waits for the one child to finish, and the exit status of the pipeline is the exit status of the child (aka the last process in the pipeline).
More recently, bash provides a mechanism whereby the main shell gets the status of each child in the pipeline; it does the coordination.
The primary fixes (apart from some mostly minor compilation warnings) were:
main shell closes all pipes after forking coordinator.
main shell waits for coordinator to complete.
coordinator closes all pipes after forking pipeline.
coordinator waits for all processes in pipeline to complete.
coordinator exits (instead of returning, which produced duelling prompts).
A better fix would eliminate the coordinator sub-shell (it would behave like the classical system described).
I am creating a small C program which will find the first unattached dtach session and attach to it. However, dtach does not provide a way to check attached/unattached status. Is it possible to get this information at all? (For example, by directly reading the sockets created by dtach?)
Use lsof to check how many dtach processes have the socket file open.
If the process count is greater than 1, the socket is attached.