GStreamer 1.0 1.4.5 RTSP Example Server sends 503 Service unavailable - c

I'm trying to build an RTSP server based on the GStreamer 1.4.5 RTSP server plugin. The source code of the example is taken from here. The WITH_TLS and WITH_AUTH flags are not enabled.
I'm compiling it with Visual Studio 2013 Community Edition. The project builds and runs successfully. However, when I try to play the video in VLC using the address rtsp://127.0.0.1:8554/test, it says it can't open the MRL.
The traffic between the compiled server and the VLC player, recorded with RawCap, is the following:
Request:
OPTIONS rtsp://127.0.0.1:8554/test RTSP/1.0
CSeq: 2
User-Agent: LibVLC/2.2.0 (LIVE555 Streaming Media v2014.07.25)
Response:
RTSP/1.0 200 OK
CSeq: 2
Public: OPTIONS, DESCRIBE, GET_PARAMETER, PAUSE, PLAY, SETUP, SET_PARAMETER, TEARDOWN
Server: GStreamer RTSP server
Date: Thu, 09 Apr 2015 03:35:30 GMT
Request:
DESCRIBE rtsp://127.0.0.1:8554/test RTSP/1.0
CSeq: 3
User-Agent: LibVLC/2.2.0 (LIVE555 Streaming Media v2014.07.25)
Accept: application/sdp
Response:
RTSP/1.0 503 Service Unavailable
CSeq: 3
Server: GStreamer RTSP server
Date: Thu, 09 Apr 2015 03:35:30 GMT
The RTSP server shows no errors or warnings in either the IDE or the command prompt window.
Does anyone have any idea why this is happening? Any help would be much appreciated!

Related

hawkBit swupdate Suricatta: HTTP/1.1 401 Unauthorized

I want to set up hawkBit (running on a server) and swupdate (running on multiple Linux clients) to perform OS/software updates in Suricatta mode.
1/ Following up on my post on the hawkBit community, I've succeeded in running hawkBit on my server as below:
Exported to external link: http://:
Enabled MariaDB
Enabled Gateway Token Authentication (in hawkBit system configuration)
Created a software module
Uploaded an artifact
Created a distribution set
Assigned the software module to the distribution set
Created a Target (in the Deployment Management UI) with the Target ID "dev01"
Created a Rollout
Created a Target Filter
2/ I've succeeded in building/running swupdate following the SWUpdate guideline:
Enabled Suricatta daemon mode
Run swupdate: /usr/bin/swupdate -v -k /etc/public.pem -u '-t DEFAULT -u http://<domain>:<port> -i dev01'
I'm pretty sure this command isn't correct; the output log is as below:
* Trying <ip address>...
* TCP_NODELAY set
* Connected to <domain> (<ip address>) port <port> (#0)
> GET /DEFAULT/controller/v1/10 HTTP/1.1
Host: <domain>:<port>
User-Agent: libcurl-agent/1.0
Content-Type: application/json
Accept: application/json
charsets: utf-8
< HTTP/1.1 401 Unauthorized
< Date: Sun, 16 May 2021 02:43:40 GMT
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Cache-Control: no-cache, no-store, max-age=0, must-revalidate
< Pragma: no-cache
< Expires: 0
< X-Frame-Options: DENY
< Content-Length: 0
<
* Connection #0 to host <domain> left intact
[TRACE] : SWUPDATE running : [channel_log_effective_url] : Channel's effective URL resolved to http://<domain>:<port>/DEFAULT/controller/v1/dev01
[ERROR] : SWUPDATE failed [0] ERROR corelib/channel_curl.c : channel_get : 1109 : Channel operation returned HTTP error code 401.
[DEBUG] : SWUPDATE running : [suricatta_wait] : Sleeping for 45 seconds.
As per a suggestion from #laverman on Gitter:
You can use Gateway token in the Auth header of the request, e.g. “Authorization : GatewayToken a56cacb7290a8d8a96a2f149ab2f23d1”
but I don't know how the client sends this request (it should be sent by swupdate, right?)
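As far as I understand, the poll request with that header would look something like this (a Python sketch using only the standard library; the host, tenant, controller ID and token value are placeholders, not my real values):

```python
import urllib.request

# Placeholders; replace with real values.
HAWKBIT_URL = "http://hawkbit.example.com:8080"
TENANT = "DEFAULT"
CONTROLLER_ID = "dev01"
GATEWAY_TOKEN = "a56cacb7290a8d8a96a2f149ab2f23d1"  # hypothetical token

# hawkBit DDI root resource: GET /{tenant}/controller/v1/{controllerId}
req = urllib.request.Request(
    f"{HAWKBIT_URL}/{TENANT}/controller/v1/{CONTROLLER_ID}",
    headers={
        "Accept": "application/json",
        # GatewayToken authentication, as suggested on Gitter:
        "Authorization": f"GatewayToken {GATEWAY_TOKEN}",
    },
)

# urllib.request.urlopen(req) would perform the poll; swupdate is supposed
# to do the equivalent internally via libcurl.
print(req.get_header("Authorization"))
```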
3/ Following the instructions from the Tutorial at EclipseCon Europe 2019, I was guided to send requests to provision multiple clients from the hawkBit Device Simulator. The problem is how to apply this to real devices.
Another confusion: when creating a new Software Module or Distribution in the hawkBit UI, I can't find their IDs, but when creating them by sending requests as in the Tutorial, I can see the IDs in the response.
So my questions are:
1/ Are my hawkBit setup steps correct?
2/ How can I configure/run swupdate (on the clients) to perform the update: poll for new software, download, update, report status, ...?
If my description isn't clear enough, please tell me.
Thanks
Happy to see that you're trying out hawkBit for your solution!
I have a few remarks:
The suricatta parameter for the GatewayToken is -g, and -k for the TargetToken respectively.
The -g <GATEWAY_TOKEN> needs to be set inside the quotation marks;
see the SWUpdate documentation.
Example: /usr/bin/swupdate -v -u '-t DEFAULT -u http://<domain>:<port> -i dev01 -g 76430e1830c56f2ea656c9bbc88834a3'
For GatewayToken authentication, you need to provide the token generated in the System Config view; it is a generated hash code that looks similar to this example here.
You can also authenticate each device/client separately using their own TargetToken.
You can find more information in the Hawkbit documentation
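For reference, the same suricatta options can also live in swupdate's configuration file instead of the -u string (a sketch in swupdate's libconfig-style format; the exact key names should be verified against the SWUpdate documentation for your version, and all values below are placeholders):

```
suricatta :
{
	tenant       = "DEFAULT";
	id           = "dev01";
	url          = "http://<domain>:<port>";
	gatewaytoken = "76430e1830c56f2ea656c9bbc88834a3";
};
```

swupdate can then be started with swupdate -f /etc/swupdate.cfg, assuming your build has config-file support enabled.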

One Click Install - Dgraph - Gru

I installed Dgraph Gru for interviews:
go get github.com/dgraph-io/gru
cd $GOPATH/src/github.com/dgraph-io/gru
git checkout develop
go build . && ./gru -user=admin -pass=pass -secret=0a45e5eGseF41o0719PJ39KljMK4F4v2
docker run -it -p 127.0.0.1:8088:8080 -p 127.0.0.1:9080:9080 -v ~/dgraph:/dgraph --name dgraph dgraph/dgraph:v0.7.5 dgraph --bindall=true
I'm getting the below error when I try to create a quiz or questions:
Aug 09 10:14:23 gru[16999]: [negroni] Completed 500 Internal Server Error in 30.001305978s
Aug 09 10:14:24 gru[16999]: [negroni] Completed 500 Internal Server Error in 30.000762875s
Aug 09 10:19:40 gru[16999]: Error while rejecting candidates: Couldn't get response from Dgraph: Post http://localhost:8088/query: dial tcp 127.0.0.1:8088: i/o timeout[negroni] Started POST /api/admin/add-question
Aug 09 10:20:10 gru[16999]: [negroni] Completed 500 Internal Server Error in 30.001419475s
Aug 09 10:20:17 gru[16999]: [negroni] Started POST /api/admin/get-all-questions
Aug 09 10:20:31 gru[16999]: [negroni] Started GET /api/admin/get-all-tags
Aug 09 10:20:43 gru[16999]: [negroni] Started GET /api/admin/get-all-tags
Aug 09 10:20:47 gru[16999]: [negroni] Completed 500 Internal Server Error in 30.000821271s
Aug 09 10:21:01 gru[16999]: [negroni] Completed 500 Internal Server Error in 30.000790588s
Aug 09 10:21:13 gru[16999]: [negroni] Completed 500 Internal Server Error in 30.000748794s
Aug 09 11:12:24 gru[16999]: Error while rejecting candidates: Couldn't get response from Dgraph: Post http://localhost:8088/query: dial tcp 127.0.0.1:8088: i/o timeoutError while rejecting candidates: Couldn't get response from Dgraph: Post http://localhost:8088/query: dial tcp 127.0.0.1:8088: i/o timeoutError while rejecting candidates: Couldn't get response from Dgraph: Post http://localhost:8088/query: dial tcp 127.0.0.1:8088: i/o timeoutError while rejecting candidates: Couldn't get response from Dgraph: Post http://localhost:8088/query: dial tcp 127.0.0.1:8088: i/o timeoutError while rejecting candidates: Couldn't get response from Dgraph: Post http://localhost:8088/query: dial tcp 127.0.0.1:8088: i/o timeout[negroni] Started POST /api/admin/get-all-questions
Aug 09 11:12:54 gru[16999]: [negroni] Completed 500 Internal Server Error in 30.000807257s
Aug 09 11:13:10 gru[16999]: [negroni] Started GET /api/admin/get-all-tags
Aug 09 11:13:41 gru[16999]: [negroni] Completed 500 Internal Server Error in 30.000734698s
Aug 09 11:16:56 gru[16999]: Error while rejecting candidates: Couldn't get response from Dgraph: Post http://localhost:8088/query: dial tcp 127.0.0.1:8088: i/o timeout[negroni] Started POST /api/admin/add-question
Aug 09 11:17:26 gru[16999]: [negroni] Completed 500 Internal Server Error in 30.000777429s
I tried different versions of the Dgraph database.
Are there any scripts or a Docker image to install it on the fly?
From the logs I can see that it is not able to connect to Dgraph. That's because Docker is exposing port 8088, whereas the Gru server expects Dgraph to be running on 8080. You can run Dgraph like:
docker run -it -p 127.0.0.1:8080:8080 -v ~/dgraph:/dgraph dgraph/dgraph:v0.7.5 dgraph --bindall=true
You also have to run the Gru server and Caddy as mentioned in the README. Now that I think about it, the UI doesn't need to be run separately from the Gru web server. I can try adding a one-step quick-install guide over the weekend.
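A quick way to confirm this kind of mismatch is to check which local port actually accepts connections (a small Python sketch, assuming Gru expects Dgraph on localhost:8080 as described above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With the original docker command, Dgraph listens on 8088, not 8080,
# so Gru's requests to localhost:8080 time out:
print("8080 open:", port_open("127.0.0.1", 8080))
print("8088 open:", port_open("127.0.0.1", 8088))
```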

ErrorMisc "Unsucessful HTTP code: 400" on cabal update

Having installed Haskell Platform 2013.2.0.0, and using the Windows 7 cmd prompt:
when running the command "cabal update",
I got the following output:
Downloading the latest package list from hackage.haskell.org
cabal: Failed to download http://hackage.haskell.org/packages/archive/00-index.tar.gz
: ErrorMisc "Unsucessful HTTP code: 400"
To confirm that the link was working, I visited it in my web browser.
At this stage, I'm stuck as to how to resolve the issue.
Also, I'm not sure whether this helps or not, but here's what I get when I run "cabal update -v3":
Downloading the latest package list from hackage.haskell.org
Sending:
GET /packages/archive/00-index.tar.gz HTTP/1.1
Host: hackage.haskell.org
User-Agent: cabal-install/1.16.0.2
Creating new connection to hackage.haskell.org
Received:
HTTP/1.1 400 Bad Request
Server: nginx/1.6.0
Date: Thu, 31 Jul 2014 16:24:24 GMT
Content-Type: text/html
Content-Length: 172
Connection: close
cabal: Failed to download
http://hackage.haskell.org/packages/archive/00-index.tar.gz : ErrorMisc
"Unsucessful HTTP code: 400"
I was able to solve this problem on my machine by disabling the antivirus and firewall. I hope it helps.

Google App Engine not generating 304, instead generating 200 always

Google App Engine always generates 200 for the URL /test.js.
test.js is not a static resource, but a URL pattern for dynamically generated content. The content expires after N hours, and fresh content is then generated.
I've tried Last-Modified, ETag and Cache-Control. None seems to work.
Request
Request URL:http://localhost:8081/test.js
Request Method:GET
Status Code: 200 OK
Accept:*/*
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Cache-Control:max-age=0
Connection:keep-alive
Host:localhost:8081
If-Modified-Since:Fri, 18 Oct 2013 14:10:39 GMT
If-None-Match:"1B2M2Y8AsgTpgAmY7PhCfg"
Referer:http://localhost:8080/
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36
Response Headers
cache-control:public, max-age=360000
Content-Length:2
content-type:application/script; charset=utf-8
Date:Fri, 18 Oct 2013 14:10:40 GMT
etag:"1B2M2Y8AsgTpgAmY7PhCfg"
expires:Tue, 22 Oct 2013 18:10:40 GMT
last-modified:Fri, 18 Oct 2013 14:10:40 GMT
Server:Development/2.0
Your request has Cache-Control: max-age=0, so any intermediate caches (including the browser cache) won't serve cached content. This is likely the result of a setting in your browser.
For requests with revalidation headers (If-*), you need to have the logic in place to respond properly. To save bandwidth, this is pretty simple with
webob (which is used by webapp2 and other frameworks) and its conditional-response setting. Avoiding the computation as well depends a little more on what you're doing, but webob helps here too.
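The revalidation check itself is small. Conceptually, the server compares the request's validators against the current representation and returns 304 when they match (a framework-agnostic Python sketch; webob's conditional-response support automates the same decision, and comparing dates by string equality is a simplification of real HTTP-date handling):

```python
def response_status(request_headers, current_etag, current_last_modified):
    """Decide between 200 and 304 for a conditional GET.

    request_headers: dict of incoming request headers
    current_etag: the ETag of the resource as generated now
    current_last_modified: HTTP-date string of the last change
    """
    # If-None-Match takes precedence over If-Modified-Since.
    inm = request_headers.get("If-None-Match")
    if inm is not None:
        return 304 if inm == current_etag else 200
    ims = request_headers.get("If-Modified-Since")
    if ims is not None and ims == current_last_modified:
        return 304
    return 200

# The headers from the question: the ETag validators match, so this
# request should get a 304 rather than a full 200 response.
req = {
    "If-None-Match": '"1B2M2Y8AsgTpgAmY7PhCfg"',
    "If-Modified-Since": "Fri, 18 Oct 2013 14:10:39 GMT",
}
print(response_status(req, '"1B2M2Y8AsgTpgAmY7PhCfg"',
                      "Fri, 18 Oct 2013 14:10:40 GMT"))  # prints 304
```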
Redbot is a really useful tool for checking HTTP cache behaviour.
Refer to this for HTTP status codes:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
200 is just the normal HTTP OK status; it says nothing about whether the resource is static or not. (Try any dynamic web page out there, e.g. Facebook, and you will notice it returns 200.) A 200 response is perfectly normal.
As for 304 ("Not Modified"): as the W3 spec says, "The 304 response MUST NOT contain a message-body". This is not what you want.
In your case, your concern should be to set the correct expiry time in these HTTP headers (do it within your program code), so that the browser always requests a fresh copy of the content after the expiry time (e.g. after 1 hour):
cache-control:public, max-age=3600
expires:Tue, 20 Oct 2013 18:10:40 GMT
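Headers like the two above can be generated with Python's standard library so that max-age and Expires stay consistent (a sketch; adapt it to your handler framework):

```python
from email.utils import formatdate
import time

MAX_AGE = 3600  # one hour, as in the example above

def freshness_headers(now=None):
    """Build matching Cache-Control and Expires headers."""
    if now is None:
        now = time.time()
    return {
        "Cache-Control": f"public, max-age={MAX_AGE}",
        # formatdate(usegmt=True) emits an RFC 1123 HTTP-date.
        "Expires": formatdate(now + MAX_AGE, usegmt=True),
    }

print(freshness_headers())
```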

zip direct link download not working in IE7 and IE8

Zip file direct link download not working in IE7 and IE8
Example: http://beta-ffconeworld.fairfactories.org/Uploads/documents/docfiles/122_test.zip
$ curl -I http://beta-ffconeworld.fairfactories.org/Uploads/documents/docfiles/122_test.zip
HTTP/1.1 200 OK
Date: Fri, 15 Jul 2011 10:58:46 GMT
Server: Apache/2.2.16 (Amazon)
Last-Modified: Fri, 15 Jul 2011 10:09:11 GMT
ETag: "7cc4-8565-4a818d74be4db"
Accept-Ranges: bytes
Content-Length: 34149
Vary: Accept-Encoding,User-Agent
Connection: close
Content-Type: application/zip
I ran into a similar issue once and solved it by disabling gzip compression in Apache for that particular file extension or directory.
In my case, Apache was trying to compress a file that was already compressed, thus corrupting it. We added
SetEnvIfNoCase Request_URI \.(?:zip)$ no-gzip dont-vary
into httpd/conf/extra/httpd-deflate.conf and all was well.
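The corruption is easy to reproduce: once a zip archive is gzip-compressed a second time, its bytes no longer look like a zip archive, so a client that fails to undo the Content-Encoding sees a broken file (a small Python illustration; gzip.compress stands in for what Apache's deflate module effectively does):

```python
import gzip
import io
import zipfile

# Build a small zip archive in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("test.txt", "hello")
zip_bytes = buf.getvalue()

# Gzip the already-zipped bytes, as a misconfigured server would.
double = gzip.compress(zip_bytes)

print(zipfile.is_zipfile(io.BytesIO(zip_bytes)))  # True: valid archive
print(zipfile.is_zipfile(io.BytesIO(double)))     # False: "corrupt" to a
                                                  # client ignoring the encoding
```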
Works fine on my machine.
Check your security settings. In IE7, this is Tools -> Internet Options -> Security -> Custom Level...; in that list it's possible to disable file downloads, or to enable them to download without a prompt.
