IceCast server fallback file

How do I set up a fall-back file for an IceCast server?

If you happen to be using a very useful toolset named liquidsoap together with icecast2, then you should be thrilled with the following example: it plays a directory of sound files, and if a live stream starts broadcasting, it fades out the playlist, plays a "jingle" sound file, then fades up the live stream. Aside from the silly URLs, this was pulled from a working environment.
Installing liquidsoap was as painless as apt-get install liquidsoap. If you want to use MP3, apt-get install lame and switch the output to output.icecast.lame(). Create a file with a .liq extension (example.liq), chmod +x example.liq, and you're off to the ./races.
#!/usr/bin/liquidsoap
# use the -d flag for daemon mode
set("log.file",false)
set("log.stdout",true)
set("log.level",3)
set("harbor.icy",true)
# fallback of last resort: spoken text via the "say:" protocol
default = single("say:How are you gentlemen!!
all your base are belong to us.
You are on the way to destruction.
What you say!!
You have no chance to survive make your time!
HA! HA! HA! HA! HA!")

# local sources: station jingles and the regular music playlist
jingles = playlist("/home/edward/micronemez-jinglez")
audio = playlist("/home/edward/micronemez-ogg")

# live input pulled over HTTP; silence is stripped so the fallback can kick in
#liveset = mksafe(input.http("http://audio.micronemez.com"))
liveset = strip_blank(input.http("http://f-dt.com"))
liveset = rewrite_metadata([("artist", "FUTURE__DEATH__TOLL"),
                            ("title", "LIVE FROM YELLOW_HOUSE")], liveset)

# prefer the live stream as soon as it appears (track_sensitive=false),
# otherwise fall back to the playlist, then to the spoken default
radio = fallback(track_sensitive=false,
                 [skip_blank(liveset), audio, default])

# mix in roughly one jingle for every five tracks
radio = random(weights=[1,5], [jingles, radio])

output.icecast.vorbis(
  host="futuredeathtoll.com", port=8000, password="hackme",
  genre="Easy Listening", url="http://f-dt.com",
  description="pirate radio", mount="micronemez-radio.ogg",
  name="FUTURE__DEATH__TOLL ((YELLOW_HOUSE))", radio)
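Per the comment at the top of the script, you can run it in the foreground directly (the shebang hands it to liquidsoap), or pass the -d flag for daemon mode:
./example.liq
liquidsoap -d example.liq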
some very useful links:
http://savonet.sourceforge.net/doc-svn/cookbook.html
http://oshyn.com/_blog/General/post/Audio_and_Video_Streaming_with_Liquidsoap_and_Icecast/
http://wiki.sourcefabric.org/display/LS/WikiStart

From the doc:
<fallback-mount>/example2.ogg</fallback-mount>
<fallback-override>1</fallback-override>
<fallback-when-full>1</fallback-when-full>
Please see icecast2_config_file for more explanation; scroll to the fallback-mount description.
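For context, here is a minimal sketch of how those elements sit inside a <mount> block in icecast.xml; the mount names below are made up and should match your own setup:
<mount>
    <mount-name>/live.ogg</mount-name>
    <!-- listeners are moved here whenever /live.ogg has no source connected -->
    <fallback-mount>/example2.ogg</fallback-mount>
    <!-- move listeners back to /live.ogg once its source reconnects -->
    <fallback-override>1</fallback-override>
    <!-- also divert listeners to the fallback when this mount is full -->
    <fallback-when-full>1</fallback-when-full>
</mount>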

Related

sagemaker.estimator.Estimator containers eu-west-2

Looking at this, specifically:
containers = {'us-west-2': '433757028032.dkr.ecr.us-west-2.amazonaws.com/xgboost:latest',
              'us-east-1': '811284229777.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest',
              'us-east-2': '825641698319.dkr.ecr.us-east-2.amazonaws.com/xgboost:latest',
              'eu-west-1': '685385470294.dkr.ecr.eu-west-1.amazonaws.com/xgboost:latest'}
sess = sagemaker.Session()
xgb = sagemaker.estimator.Estimator(containers[boto3.Session().region_name],
                                    role,
                                    instance_count=1,
                                    instance_type='ml.m4.xlarge',
                                    output_path='s3://{}/{}/output'.format(bucket, prefix),
                                    sagemaker_session=sess)
where do these entries (container image names?):
'685385470294.dkr.ecr.eu-west-1.amazonaws.com/xgboost:latest'
come from? I am especially after an eu-west-2 one - hope there is one (-: Thanks!
PS:
It may be that I can just run this at run time:
from sagemaker import image_uris
image_uris.retrieve(framework='xgboost', region='eu-west-2', version='latest')
I'm not an expert, but those are container registry paths, and you can find the full list here.
Yes.
You can use this code snippet:
from sagemaker import image_uris
image_uris.retrieve(framework='xgboost', region='eu-west-2', version='latest')
and it will give you this value:
644912444149.dkr.ecr.eu-west-2.amazonaws.com/xgboost:latest
This is the latest version of the container, but it is not recommended to run 'latest' in prod environments.
https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html
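To tie it together, here is a rough sketch (assuming role, bucket and prefix are already defined, as in the question) that resolves the image for whatever region the session runs in, so eu-west-2 needs no hard-coded dict:
import boto3
import sagemaker
from sagemaker import image_uris

sess = sagemaker.Session()
region = boto3.Session().region_name  # e.g. 'eu-west-2'

# resolve the built-in XGBoost image for this region; pinning a version
# such as '1.0-1' is safer for prod than 'latest'
container = image_uris.retrieve(framework='xgboost', region=region, version='1.0-1')

xgb = sagemaker.estimator.Estimator(container,
                                    role,
                                    instance_count=1,
                                    instance_type='ml.m4.xlarge',
                                    output_path='s3://{}/{}/output'.format(bucket, prefix),
                                    sagemaker_session=sess)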

Webdriver.IO not able to download file continuously using Webdriver.io

I'm using Webdriver.io to download a file continuously
I tried the following code:
var webdriverio = require('webdriverio');
var options = {
    desiredCapabilities: {
        browserName: 'chrome'
        // waitforTimeout: 1000000
    }
};

webdriverio
    .remote(options)
    .init()
    .url('https://xxx')
    .setValue('#username', 'xxx@gmail.com')
    .click('#login-submit')
    .pause(1000)
    .setValue('#password', '12345')
    .click('#login-submit')
    .getTitle().then(function(title) {
        console.log('Title was: ' + title);
    })
    .pause(20000)
    .getUrl().then(function(url) {
        console.log('URL: ' + url);
    })
    .getTitle().then(function(title) {
        console.log('Title was: ' + title);
    })
    .click("a[href='/wiki/admin'] button.iwdh")
    .getUrl().then(function(url) {
        console.log('URL after settings ' + url);
    })
    .pause(3000)
    .scroll('div.jsAtfH', 0, 1000)
    .click("a[href='/wiki/plugins/servlet/ondemandbackup/admin']")
    .pause(10000)
    .click('//*[@id="backup"]/a')
    //.pause(400000)
    .end();
Note: the file size is 7GB, and how long it takes to download depends on the network. So instead of using pause() and timeout(), is there any way to do this using webdriver.io or node.js?
To begin with, your current task (waiting for a HUUUUGE file to download) is not a common use-case when it comes to Webdriver-based automation frameworks, WebdriverIO included. Such frameworks aren't meant to download massive files.
First off, you're confusing the waitforTimeout value with WebdriverIO test timeout. Your test is timing out before the .pause() ends.
Currently you're running your tests via the WebdriverIO test runner. If you want to increase the test timeout, you have to use a different test framework (Mocha, Jasmine, or Cucumber) and set its timeout value to whatever you find appropriate. Going on, I recommend you use Mocha (coming from an ex-Cucumber guy).
You will have to install Mocha: npm install --save-dev wdio-mocha-framework and run your tests with it. Your test should look like this afterwards:
describe("Your Testsuite", function() {
    it("\nYour Testcase\n", function() {
        return browser
            .url('https://xxx')
            .setValue('#username', 'xxx@gmail.com')
            .click('#login-submit')
            // rest of the steps
            .scroll('div.jsAtfH', 0, 1000)
            .click("a[href='/wiki/plugins/servlet/ondemandbackup/admin']")
            .pause(10000)
            .click('//*[@id="backup"]/a');
    });
});
Your config (wdio.conf.js) should contain the following:
framework: 'mocha',
mochaOpts: {
    ui: 'bdd',
    timeout: 99999999
}
As a side-note, I tried waiting a very long time (> 30 mins) using the above config and had no issues what-so-ever.
Let me know if this helps. Cheers!
If you click a download button in your browser and then close the browser, the download is cancelled too. If you own the website with the download button, try to rewrite it so that you have a downloadable URL; then you can use a module or another way to download the file from that HTTP URL (see the sketch below). If you are not the owner and you can't find a URL in the href, you can maybe grab the generated download URL from the network tab of your browser's inspector.
Also, I have never heard of a browser being closed after a timeout. Maybe it comes from webdriver.io; I have never left Chrome open that long with webdriver.io.
You could try a workaround: run a webdriver.io command on an interval, for example every minute, so the session doesn't time out.
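Here is a minimal Node sketch of that direct-URL approach; the URL and file name are placeholders, and no browser is involved:
// download.js - stream a large file to disk with plain Node
var https = require('https');
var fs = require('fs');

var file = fs.createWriteStream('backup.zip');
https.get('https://example.com/wiki/download/backup.zip', function (res) {
    res.pipe(file);  // stream the response body straight to disk
    file.on('finish', function () {
        file.close();
        console.log('Download finished');
    });
}).on('error', function (err) {
    console.error('Download failed: ' + err.message);
});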
I know it's a very old question, but I wanted to answer the question from the comments (and don't have the ability to comment yet). But I will answer the main question too.
When I am giving the timeout in the "wdio.conf.js" file it's not able to download the file and closes the session, but by giving .pause(2000000) in the webdriver.io code it's able to download a file of 7GB. What is the use of the timeout in "wdio.conf.js" if it's kicking out the session without downloading?
So this timeout is related to element state during the test run: it "determines how long the instance should wait for that element to reach the state".
https://webdriver.io/docs/timeouts.html - this can help. But to answer the question too:
There are many more timeouts such a test deals with. As iamdanchiv wrote, for this you should try using one of the automatically supported frameworks, like Mocha or Jasmine.
IMO right now the easiest way would be to do a quick fresh setup using the CLI provided by WDIO:
https://webdriver.io/docs/gettingstarted.html
There you can simply pick the additional framework you want to use. I would suggest using Jasmine and Chromedriver for this. Then in your wdio.conf.js you can change this part:
waitforTimeout: 10000,
jasmineNodeOpts: {
// Jasmine default timeout
defaultTimeoutInterval: 60000,
//
},
to something that works for you. Or you can use one of the boilerplate projects from the WDIO page, like this one:
https://webdriver.io/docs/boilerplate.html
But that's not all! You will still have to create some method or function that checks for the file. So check where the file gets downloaded, or make it download where you want, and then create a method that uses some kind of wait:
https://webdriver.io/docs/api/browser/waitUntil.html
browser.waitUntil(condition, { timeout, timeoutMsg, interval })
You can set the timeout either here or in wdio.conf via 'waitforTimeout'. Inside the condition you can use the Node filesystem module (https://nodejs.org/api/fs.html) to check the state of the file.
This can be helpful for getting the wait-for-file condition right:
https://blog.kevinlamping.com/downloading-files-using-webdriverio/
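For example, here is a rough sketch of such a check, assuming sync mode and a made-up download path; on Chrome you can also wait for the .crdownload partial file to disappear, as done below:
// inside a test, after clicking the backup/download link (path is hypothetical)
var fs = require('fs');
var downloadPath = '/home/user/Downloads/backup.zip';

browser.waitUntil(
    function () {
        // done once the file exists and Chrome's partial-download file is gone
        return fs.existsSync(downloadPath) && !fs.existsSync(downloadPath + '.crdownload');
    },
    {
        timeout: 2 * 60 * 60 * 1000,  // allow up to two hours for a 7GB file
        timeoutMsg: 'file was not downloaded in time',
        interval: 60 * 1000           // poll once a minute
    }
);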

Bro is not extracting all files from Pcap file

I wrote this Bro script to extract all files from a pcap file. The problem is that it is not extracting all of them. I have an http.cap that I analyzed with Wireshark; when I export HTTP objects I get 2 .html files, but my Bro script extracts only one of them.
@load base/files/extract

global hash_number = 100;

event bro_init()
{
    #Log::disable_stream(Conn::LOG);
    mkdir("extract_files");
}

event file_sniff(f: fa_file, meta: fa_metadata)
{
    # derive a file extension from the sniffed MIME type, if any
    local ext = "";
    if ( meta?$mime_type )
        ext = split_string(meta$mime_type, /\//)[1];
    # spread extracted files over hash_number subdirectories
    local hash = f$seen_bytes % hash_number;
    mkdir(fmt("./extract_files/%d", hash));
    local file_path = fmt("%d/%s-%s.%s", hash, f$source, f$id, ext);
    Files::add_analyzer(f, Files::ANALYZER_EXTRACT, [$extract_filename=file_path]);
}
I called my bro script like this: bro -r http.cap myscript.bro.
I debugged the file_sniff event with print statements and only 1 of the 2 .html files is tracked. Is something wrong with the Bro platform, or is it something I am missing?
This is my pcap file.
I also tried with this other pcap file and got the same result. In Wireshark I get some images, JS and HTTP files, and Bro extracts only 2 images.
I asked people in the freenode chat (channel #Bro) and they told me that those pcaps contain connections without the handshake. So Bro does not track packets in connections without a handshake, unlike Wireshark. This issue is described as solved in the changelog of Bro 2.5, but I downloaded that beta version, compiled it, and got the same result. I don't know if that is because it is a beta version. I hope this gets fixed in future versions.

Use variable language specific strings in hugo config file

My goal is to build a multilingual site using hugo. For this I would like to:
not touch the theme file
have a config file which defines the overall structure for all languages (config.toml)
have a "string" file for all languages
So for example, I would have a config.toml file like this:
[params.navigation]
brand = "out-website"
[params.navigation.links]
about = $ABOUT_NAME
services = $SERVICES_NAME
team = $TEAM_NAME
impressum = $IMPRESSUM_NAME
an English language file:
ABOUT_NAME=About
SERVICES_NAME=Services
TEAM_NAME=Team
IMPRESSUM_NAME=Impressum
and a German language file like this:
ABOUT_NAME=Über uns
SERVICES_NAME=Dienste
TEAM_NAME=Mitarbeiter
IMPRESSUM_NAME=Impressum
And then when I want to compile the project for English, I do something along the lines of:
hugo --theme=... --config=config.toml --config=english.toml
and for German:
hugo --theme=... --config=config.toml --config=german.toml
or in some similar way.
For this I need to use variables in config.toml that are defined in english.toml or german.toml.
My Google searches so far say that I cannot use variables in TOML.
So is there a different approach with which I could achieve this?
Your approach with variables is not optimal; use the tutorial below.
Multilingual sites are coming as a feature on Hugo 0.16, but I managed to build a multilingual site on current Hugo using this tutorial and some hacks.
The tutorial above requires you to "have a separate domain name for each language". I managed to bypass that and to have two sites, one at the root (EN) and one in a folder (/LT/).
Here are the steps I used.
Set up a reliable build runner; you can use Grunt/Gulp, but I used npm scripts. I hacked npm-build-boilerplate and outsourced rendering from Hugo, so Hugo is used only to generate the files. This is important because 1) we will be generating two sites; 2) the hacks will require operations on folders, which is easy with npm (I'm not proficient at building custom Grunt/Gulp scripts).
Set up config_en.toml and config_de.toml in root folder as per tutorial.
Here's an excerpt from my config_en.toml:
contentdir = "content/en"
publishdir = "public"
Here's an excerpt from my config_lt.toml (change it to DE in your case):
contentdir = "content/lt"
publishdir = "public/lt"
Basically, I want website.com to show the EN version and website.com/lt/ to show the LT version. This deviates from the tutorial and will require hacks.
Set up translations in the /data folder as per the tutorial.
Bonus: set up a snippet in Dash:
{{ ( index $.Site.Data.translations $.Site.Params.locale ).@cursor }}
Whenever I type "trn", I get the above; what's left is to paste the key from the translations file and I get the value in the correct language.
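As a sketch of what that lookup expects (the keys are just examples in the tutorial's spirit), a data/translations.toml could look like this:
# data/translations.toml - one table per language, example keys
[en]
about = "About"
services = "Services"

[lt]
about = "Apie"
services = "Paslaugos"
With locale = "en" under [params] in config_en.toml, {{ ( index $.Site.Data.translations $.Site.Params.locale ).about }} then renders "About".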
Hack part. The blog posts will work fine under /lt/, but not the static pages (landing pages). I used these instructions.
Basically, every static page, for example content/lt/duk.md will have slug set:
slug: "lt/duk"
But this slug would result in a doubled URL path, for example lt/lt/duk.
I fix this with an npm scripts task that uses rsync plus a manual command to delete the redundant folder:
"restorefolders": "rsync -a public/lt/lt/ public/lt/ && rm -rf public/lt/lt/",

cannot set right charset when uploading files

Please be patient and check this problem.
I wrote some simple PHP code for uploading images.
Here is the code (snippets)
<?php
header('Content-Type: text/plain; charset=utf-8');
//set encoding for prepared statements
$dbh->exec("SET NAMES UTF8");
$dbh->query('SET NAMES UTF8');
//check if file is actually an image etc.
//send image to "upload" folder
move_uploaded_file($_FILES["file"]["tmp_name"],"upload/" . $_FILES["file"]["name"]);
//save to the database a string like "upload/myImage.jpg", so I can render it on the site later
$stu = $dbh->prepare("UPDATE multi SET m_place=:name WHERE m_id = :id");
$stu->bindParam(':name', $n, PDO::PARAM_STR);
$n= "upload/".$_FILES["file"]["name"];
$stu->execute();
If the name of the image is in English, everything is fine. If it is in Greek, it is saved OK in the database, but not in the folder. In the database I see χωρις τιτλο.jpg (which is right) and in the folder χωΟΞ―Ο‚ τίτλο.jpg, which is wrong.
I've tried everything and cannot fix this so that I get the right titles in the folder.
The encoding of the database (PostgreSQL 9.1) is UTF8, and the collation and character type are Greek_Greece.1235. The collation of the table column in which I save the image's title is pg_catalog."C".
I use DreamWeaver. The file that handles the uploads is a php file. Encoding is utf8, Unicode Normalization Form is C and the Include Unicode Signature (BOM) is unchecked.
The default language is Greek in Region and Language in the control panel. (Windows 7 Home Premium)
The encoding of the browser is utf8
I also use Apache 2.2.22. Is it Apache's fault? Or is it the PHP file? Or the database?
I don't know what else to do... What am I missing? Please, please, please advise.
This seems to be a StackOverflow issue and not related to ServerFault. However, you should take a look at this post:
For naming files in UTF-8:
Can a PHP file name (or a dir in its full path) have UTF-8 characters?
For writing files in UTF-8:
How to write file in UTF-8 format?
Turns out, I had not tried everything.
Thanks to this I found the solution.
It has to do with PHP and how "PHP filesystem functions can only handle characters that are in the system codepage".
So I used the iconv function.
I changed the code like so:
move_uploaded_file($_FILES["file"]["tmp_name"],"upload/" . iconv('UTF-8', 'Windows-1253',$_FILES["file"]["name"]));
For an application using Japanese, the code page is cp932.
This can be confirmed by using the chcp command on the command line.
Sample Code
$fullPath = $uploadFolderPath."\\".iconv("utf-8", "cp932", $fileName);
move_uploaded_file($this->request->data["file_name"]['tmp_name'], $fullPath);

Resources