Stackdriver Trace PHP: How to send spans in the background?

According to https://cloud.google.com/trace/docs/setup/php, App Engine flexible environment for PHP can run a daemon that sends trace spans to Stackdriver in the background rather than as part of the request processing (which could cause increased response latency).
I am running Kubernetes Engine, but would still like to send trace requests in the background. Therefore:
1. Is it possible to run that batch daemon myself?
2. Out of curiosity, how does the Stackdriver PHP Exporter pass these spans to the daemon? I tried to find this in the source code, but could not work out how it is done.
3. If #1 is not possible, is there another way to send spans in the background?
Stackdriver Trace with Google Cloud Run seems to cover a similar topic, but does not address how to run the daemon manually.

In case anyone else is looking for this, I was able to run the batch daemon as follows:
sudo -u www-data -E vendor/bin/google-cloud-batch daemon
Note that the daemon itself must run as the same user as your “serving” PHP processes so that both can access the same SysV IPC resources, hence the sudo.
You will also need the PHP SysV IPC extensions (sysvmsg, sysvsem, sysvshm) and the pcntl extension enabled.
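For example, in a Dockerfile built on the official php image, the pieces could be wired up roughly like this (a sketch; the base image, paths, and process management are assumptions to adapt to your deployment):
FROM php:8.2-fpm
# enable the SysV IPC and pcntl extensions the batch daemon relies on
RUN docker-php-ext-install sysvmsg sysvsem sysvshm pcntl
COPY . /srv/app
WORKDIR /srv/app
# run the daemon alongside PHP-FPM; both must run as the same user (www-data)
# so they can reach the same SysV IPC resources. A process supervisor such
# as supervisord would be more robust than backgrounding with "&".
CMD ["sh", "-c", "su -s /bin/sh www-data -c 'vendor/bin/google-cloud-batch daemon' & exec php-fpm"]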

Related

How to See Log or Sysout in Flink Standalone

I run my application in Flink standalone mode, but can't find its sysout in the console or in FLINK_HOME/log.
Does anyone know where I can see my application's debug log? And how can I tell which TaskManagers my application runs on?
When running a Flink application in standalone mode on a cluster, everything that is logged to system out or system err goes into the respective local log/ directories.
So to get the logs, you have to connect (for example via SSH) to the machines running the TaskManagers and retrieve them from there.
As for which TaskManagers your application runs on:
The JobManager web interface (running on host:8081 by default) shows where the tasks are deployed to.
When the parallelism == number of slots, the tasks usually run on all machines.
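For example, on each TaskManager machine (hostname and exact file names are placeholders; the pattern follows Flink's default log naming):
ssh user@taskmanager-host
# the TaskManager's own log
tail -n 200 $FLINK_HOME/log/flink-*-taskmanager-*.log
# everything your tasks wrote to stdout/stderr
tail -n 200 $FLINK_HOME/log/flink-*-taskmanager-*.out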

How does Check_MK work with Nagios?

Hi, I have just installed a clean copy of Nagios and Check_MK, but I don't understand how they work together. Nagios uses NRPE to connect to clients and perform checks, which means some Nagios plugins have to sit on the client and return results when they are called. But how does Check_MK tie into Nagios? Does it use check_mk_agent to replace all the Nagios plugins for its checks?
Also, do the Nagios configurations have to be fully set up, with all the clients already in place to be checked, and then ported to the Check_MK interface (WATO), or can clients be added to Check_MK without being present in the Nagios configuration? This is where my confusion lies, and I can't find a concrete answer to this question anywhere. Please help.
Check_MK uses the Nagios core for these tasks:
Managing check results
Triggering alarms
Managing planned downtimes
Testing host availability
Detecting network failures
As you can see at the bottom of this page: http://mathias-kettner.com/checkmk_monitoring_system.html
Check_MK needs both a client-side monitoring agent and a server-side monitoring system.
The server-side monitoring system calls the agent on each host and passes the check results to the monitoring core (usually Nagios, but there is also a newer core built just for Check_MK).
What makes Check_MK different from other passive checks (like NRPE) is that the results of all checks are sent to the monitoring system in one package. If you run the agent on a host in a shell, it will return something like this:
➜ ~ check_mk_agent
<<<df>>>
/dev/mapper/MyStorage-rootvol ext4 15350768 13206900 1341052 91% /
dev devtmpfs 4022348 0 4022348 0% /dev
plus many more lines ....
The server-side part of Check_MK splits these packages into single checks so that the Nagios core can handle them.
So Check_MK won't replace your existing checks; it doesn't care about them. It will just add more.
You don't necessarily need WATO to configure Check_MK. WATO is just an interface for the configuration; it can also be done with plain text files. You should start with WATO and take a look at the configuration it generates.
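As an illustration of the plain-text route, hosts can be declared in main.mk (a sketch; the host names are placeholders and the commands refer to the classic cmk command-line tool):
# /etc/check_mk/main.mk
all_hosts = [ "webserver01", "dbserver01" ]
After that, run cmk -II <host> to (re)discover services on a host and cmk -O to activate the new configuration in the core.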

REST and C process integration

I need to provide RESTful API support for a Linux daemon process that will maintain and manipulate an in-memory table (a simple C structure of arrays). This daemon will act as a configuration entity and will relay the table contents to another process at boot-up or upon a configuration request.
In this context I would appreciate input on the following:
1. Would it be better to have an integrated web server, or an independent web server that talks to this daemon? Note that this server would not be required to handle huge loads.
2. Please suggest some good web servers with good REST support.
3. If an independent web server is used, what is the best mechanism for web server to daemon communication?
Note that this would be deployed on a small embedded board running Debian.
Well, a possible solution is to develop a CGI program that is executed by any web server (Apache, lighttpd, ...). This program connects to the main daemon with an IPC mechanism such as sockets, FIFOs, or message queues and, after interacting with the daemon, returns the desired output to the REST client.
The CGI program can be written in any language, but if you want to write it in C, check this project: it's a CGI program written in C that takes commands for an IP camera. The connection to a main daemon is not implemented, since it was outside the scope of the project. I like it because it has an embedded XML parser and does not require any external libraries.
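As a concrete sketch of that approach, here is a minimal CGI program in C that forwards the request's query string to a local daemon over a Unix domain socket and relays the reply. The socket path and the line-based protocol are assumptions for illustration, not part of the project mentioned above:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* hypothetical rendezvous point with the configuration daemon */
#define DAEMON_SOCK "/var/run/configd.sock"

int main(void) {
    /* the web server hands the REST request's query string
       to the CGI through the environment */
    const char *query = getenv("QUERY_STRING");
    if (query == NULL)
        query = "";

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) {
        printf("Status: 500 Internal Server Error\r\n"
               "Content-Type: text/plain\r\n\r\nsocket() failed\n");
        return 0;
    }

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, DAEMON_SOCK, sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("Status: 502 Bad Gateway\r\n"
               "Content-Type: text/plain\r\n\r\ndaemon unreachable\n");
        return 0;
    }

    /* forward the command to the daemon, then relay its answer verbatim */
    dprintf(fd, "%s\n", query);

    printf("Content-Type: application/json\r\n\r\n");
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    return 0;
}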

Google hangout desktop application flow

I am creating a screensharing application that would work in a similar manner like Google Hangout Screen Shares, and I'd like to know how the Google Talk plugin (used for Screen Shares) spawns child processes and uses a dynamic port range.
I am creating a background application that the user will have to install, and which talks to the browser as described here: http://www.codeproject.com/Articles/36517/Communicating-from-the-Browser-to-a-Desktop-Applic
But when I looked at googleTalkPlugin, which is responsible for Google Hangout screen sharing, I saw that there are a lot of processes running, and whenever I open a new browser, a new Talk plugin for that browser starts as a child process.
When I checked the port used by googleTalkPlugin, I noticed that it is dynamic! In the link above, the browser-to-desktop communication is on a static port.
I am very interested in knowing how to use dynamic port numbers. Also, should I create a child process for every browser?
Or something better?
The reason there is a separate child process for each browser is that the Google Talk application is implemented as a browser plugin. Each browser has a Google Talk plugin installed and doesn't know about the other browsers, their plugins, or their subprocesses. Each browser will launch the plugins that it has installed and, as Eduard mentioned in the comments, some plugins are started in a separate process. This behavior isn't special to Google Talk; you will see it with most plugins. If you implement your application as a browser plugin, you will have the same behavior. If you don't want your application to run as a subprocess of a browser, then you will need to write it as a standalone application, not a browser plugin.
If you want to learn more about spawning subprocesses read up on fork(). There are lots of other good resources around the internet on subprocesses.
Your other question is around dynamic port numbers. The easiest way to do this is to bind to port 0 and you will be assigned a random open port by the operating system. You can then use getsockname() to find out what port you ended up with. If you are working with a client/server situation you can have the client do this and then just tell the server which port it is using.
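A minimal C sketch of that technique (error handling mostly omitted for brevity):
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(0);   /* port 0: let the OS pick a free port */

    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    /* ask the OS which port it actually assigned */
    socklen_t len = sizeof(addr);
    getsockname(fd, (struct sockaddr *)&addr, &len);
    printf("assigned port: %d\n", ntohs(addr.sin_port));

    close(fd);
    return 0;
}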

Apache stops processing requests (mod_wsgi?)

At some point my site, running on Apache2 with mod_wsgi, just stops processing requests. The connection to the server is maintained and the client waits for a response, but one is never returned by Apache. The server at this time is at 0% CPU, and nothing is being processed. I think Apache just puts the requests into a queue and never takes them out.
When I perform apache2ctl graceful the problem does not go away; only apache2ctl restart resolves it.
My site consists of four WSGI instances of a Pyramid application and two instances of Zope 3. It normally runs fine and has no speed problems that I am aware of.
versions:
Ubuntu 10.04
apache2 2.2.14-5ubuntu8.9
libapache2-mod-wsgi 2.8-2ubuntu1
It sounds like you are running the multiple applications in embedded mode and are using third-party C extensions that have problems in sub-interpreters, resulting in potential deadlock. Otherwise, your code is internally deadlocking, or it is blocking on external services and never returning, causing exhaustion of the available processes/threads.
For a start, you should look at using daemon mode, delegating each web application to a distinct daemon process group, and forcing each to run in the main interpreter.
See:
http://code.google.com/p/modwsgi/wiki/QuickConfigurationGuide#Delegation_To_Daemon_Process
http://code.google.com/p/modwsgi/wiki/ApplicationIssues#Python_Simplified_GIL_State_API
Otherwise, use the debugging tips described in:
http://code.google.com/p/modwsgi/wiki/DebuggingTechniques
to get stack traces showing what the application is doing.
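A hedged sketch of the daemon-mode delegation for one of the applications (names, paths, and process/thread counts are placeholders, not your actual configuration):
# one daemon process group per web application
WSGIDaemonProcess pyramid.site processes=2 threads=15
WSGIScriptAlias / /srv/pyramid/app.wsgi

<Location />
    # delegate requests to the daemon process group above ...
    WSGIProcessGroup pyramid.site
    # ... and force the main (first) interpreter to avoid
    # third-party extension problems in sub-interpreters
    WSGIApplicationGroup %{GLOBAL}
</Location>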
