Service Discovery versus DCOS Overlay Network - mesosphere

I've set up a DC/OS 1.8 cluster and am currently familiarizing myself with it.
So far I have marathon-lb working like a charm with Jenkins via host networking. Now I am trying to set things up using the overlay network.
I have a couple of test containers, some on the DC/OS overlay network, some not. So far they can reach each other via IP, which is nice. However, when I try to resolve containers on the overlay network using mesos-dns, all it resolves is the host address (not exactly what I am expecting).
So I played around with Marathon a bit to figure it out. What I did was add a discovery block to ipAddress:
{
"volumes": null,
"id": "/mariadb10",
"cmd": null,
"args": null,
"user": null,
"env": {
"MYSQL_ROOT_PASSWORD": "foo"
},
"instances": 1,
"cpus": 1,
"mem": 1024,
"disk": 0,
"gpus": 0,
"executor": null,
"constraints": null,
"fetch": null,
"storeUrls": null,
"backoffSeconds": 1,
"backoffFactor": 1.15,
"maxLaunchDelaySeconds": 3600,
"container": {
"docker": {
"image": "mariadb:10.0",
"forcePullImage": false,
"privileged": false,
"network": "USER"
},
"type": "DOCKER",
"volumes": [
{
"containerPath": "/var/lib/mysql",
"hostPath": "/mnt/foo",
"mode": "RW"
}
]
},
"healthChecks": [
{
"protocol": "TCP",
"gracePeriodSeconds": 30,
"intervalSeconds": 10,
"timeoutSeconds": 10,
"maxConsecutiveFailures": 3,
"port": 3306
}
],
"readinessChecks": null,
"dependencies": null,
"upgradeStrategy": {
"minimumHealthCapacity": 1,
"maximumOverCapacity": 1
},
"labels": null,
"acceptedResourceRoles": null,
"ipAddress": {
"networkName": "dcos",
"discovery": {
"ports": [
{ "number": 3306, "name": "mysql", "protocol": "tcp" }
]
}
},
"residency": null,
"secrets": null,
"taskKillGracePeriodSeconds": null
}
Marathon tells me this is not allowed with "Bridge" or "User" networks. However, it did not complain about the following and launched the container:
{
"volumes": null,
"id": "/mariadb10",
"cmd": null,
"args": null,
"user": null,
"env": {
"MYSQL_ROOT_PASSWORD": "foo"
},
"instances": 1,
"cpus": 1,
"mem": 1024,
"disk": 0,
"gpus": 0,
"executor": null,
"constraints": null,
"fetch": null,
"storeUrls": null,
"backoffSeconds": 1,
"backoffFactor": 1.15,
"maxLaunchDelaySeconds": 3600,
"container": {
"docker": {
"image": "mariadb:10.0",
"forcePullImage": false,
"privileged": false,
"network": "USER"
},
"type": "DOCKER",
"volumes": [
{
"containerPath": "/var/lib/mysql",
"hostPath": "/mnt/foo",
"mode": "RW"
}
]
},
"healthChecks": [
{
"protocol": "TCP",
"gracePeriodSeconds": 30,
"intervalSeconds": 10,
"timeoutSeconds": 10,
"maxConsecutiveFailures": 3,
"port": 3306
}
],
"readinessChecks": null,
"dependencies": null,
"upgradeStrategy": {
"minimumHealthCapacity": 1,
"maximumOverCapacity": 1
},
"labels": null,
"acceptedResourceRoles": null,
"ipAddress": {
"networkName": "dcos"
},
"residency": null,
"secrets": null,
"taskKillGracePeriodSeconds": null
}
The funny thing is, it does not use the overlay address anymore; it now listens on the host's address and also announces the host's address into the overlay network.
Am I just doing it wrong, or does this not work as expected yet?

So,
I found the solution myself. The easy workaround is to edit /opt/mesosphere/etc/mesos-dns.json and change the order of IPSources so that netinfo is listed first.
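For reference, the relevant key in that file then looks roughly like this (only IPSources is shown; the rest of the config stays as it is, and mesos-dns needs a restart to pick up the change):
"IPSources": ["netinfo", "host"]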

Alternatively, you can use taskname.marathon.containerip.dcos.thisdcos.directory. It is documented here: https://docs.mesosphere.com/1.8/administration/overlay-networks/.
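For the /mariadb10 app above, that name should be mariadb10.marathon.containerip.dcos.thisdcos.directory (assuming the default naming scheme; I have not verified this exact hostname), so a quick check would be:
dig +short mariadb10.marathon.containerip.dcos.thisdcos.directory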

How to target and count value with JQ?

From this file:
[
{
"network": "X.X.X.1",
"defaultGateway": "X.X.X.X",
"ipAddressTab": [
{
"foo1": "10.0.0.1",
"foo2": "network",
"status": "reserved",
"foo4": null,
"foo5": null,
"foo6": null,
"foo7": null,
"foo8": null,
"foo9": null,
"foo10": null,
"foo11": null
},
{
"foo1": "10.0.0.2",
"foo2": "network",
"status": "reserved",
"foo4": null,
"foo5": null,
"foo6": null,
"foo7": null,
"foo8": null,
"foo9": null,
"foo10": null,
"foo11": null
},
{
"foo1": "10.0.0.3",
"foo2": "network",
"status": "reserved",
"foo4": null,
"foo5": null,
"foo6": null,
"foo7": null,
"foo8": null,
"foo9": null,
"foo10": null,
"foo11": null
},
{
"foo1": "10.0.0.4",
"status": "available"
},
{
"foo1": "10.0.0.5",
"status": "available"
},
{
"foo1": "10.0.0.6",
"status": "available"
},
{
"foo1": "10.0.0.7",
"status": "available"
}
],
"full": false,
"id": "xxx"
},
{
"network": "X.X.X.2",
"defaultGateway": "X.X.X.X",
"ipAddressTab": [
{
"foo1": "10.0.0.1",
"foo2": "network",
"status": "reserved",
"foo4": null,
"foo5": null,
"foo6": null,
"foo7": null,
"foo8": null,
"foo9": null,
"foo10": null,
"foo11": null
},
{
"foo1": "10.0.0.2",
"foo2": "network",
"status": "reserved",
"foo4": null,
"foo5": null,
"foo6": null,
"foo7": null,
"foo8": null,
"foo9": null,
"foo10": null,
"foo11": null
},
{
"foo1": "10.0.0.3",
"foo2": "network",
"status": "reserved",
"foo4": null,
"foo5": null,
"foo6": null,
"foo7": null,
"foo8": null,
"foo9": null,
"foo10": null,
"foo11": null
},
{
"foo1": "10.0.0.4",
"status": "available"
},
{
"foo1": "10.0.0.5",
"status": "available"
},
{
"foo1": "10.0.0.6",
"status": "available"
},
{
"foo1": "10.0.0.7",
"status": "available"
}
],
"full": false,
"id": "xxx"
}
]
# Just an example; there are more lines in my file
I can keep the information that I want:
cat myfile | jq 'map({network, full})'
[
{
"network": "X.X.X.1",
"full": false
},
{
"network": "X.X.X.2",
"full": false
}
]
Now I'm looking for a tip to count and display some values. For example, I would like to display the number of reserved, allocated and available addresses, like this:
[
{
"network": "X.X.X.1",
"full": false,
"reserved": 3,
"available": 4
},
{
"network": "X.X.X.2",
"full": false,
"reserved": 3,
"available": 4
}
]
I've looked everywhere and found no good example of how to do that...
Can someone show me how to get this output?
Thanks!
Use reduce to count statuses.
map({network, full} +
reduce .ipAddressTab[].status as $s ({}; .[$s] += 1))
You can change {} to {reserved: 0, available: 0} to maintain a consistent order of keys among all the entries.
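With that change the filter would look like this (same logic, just a non-empty starting object, so both keys always appear and keep their order):
map({network, full} +
reduce .ipAddressTab[].status as $s ({reserved: 0, available: 0}; .[$s] += 1))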
One way to do this would be to use a function to count the objects of each type
def f($path; $val): $path | map(select(.status == $val)) | length;
map({network, full, reserved: f(.ipAddressTab; "reserved"), available: f(.ipAddressTab; "available")})
The function f takes a path and the status string to be looked up, then returns the number of matching objects in the array.
With oguz ismail's suggestion to avoid repetition
def f($val): map(select(.status == $val)) | length;
map({network, full} + (.ipAddressTab | { reserved: f("reserved"), available: f("available")}))
When counting a potentially large number of objects, it's usually better to
use stream-oriented filters so as to avoid constructing arrays. These two are often useful as a pair, though in the present case defining just count/1 by itself would be sufficient:
def sigma(s): reduce s as $x (0; .+$x);
def count(s): sigma(s|1);
To achieve the stated goal, one can now simply write the specification as a program:
map({network,
full,
reserved: count(.ipAddressTab[] | select(.status == "reserved")),
available: count(.ipAddressTab[] | select(.status == "available"))
})
Generalization
Now for a little jq magic -- no references to specific "status" values at all:
def countStatus($s):
{($s): count(.ipAddressTab[] | select(.status == $s))};
def statuses: [.ipAddressTab[].status] | unique;
map( {network, full}
+ ([countStatus(statuses[])] | add) )
total_available
In a comment, a question about showing total_available was asked.
To add a total_available key to each object, you could append the
following to either of the above pipelines:
| {total_available: sigma(.[] | .available)} as $total
| map(. + $total)
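Appended to the first of the pipelines above and run against the sample input (four available addresses per network), that would produce something like:
[
{
"network": "X.X.X.1",
"full": false,
"reserved": 3,
"available": 4,
"total_available": 8
},
{
"network": "X.X.X.2",
"full": false,
"reserved": 3,
"available": 4,
"total_available": 8
}
]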

Where can I find kiwi tcms parameters information about json-rpc?

I'm practicing using JSON-RPC to create a test case, and I want to associate a test plan with the test case, but I don't know which parameter to use for the plan.
Can anyone give me some suggestions? Thanks.
My example looks like this:
Test plan ID : 3
Test plan name: test
Using a Postman request:
{
"jsonrpc":"2.0",
"method":"TestCase.create",
"params":{"values":{"summary":"jsonrpctest","case_status":2,"category":2,"priority":1,"text":"20201005test","plan":[3,"test"]}},
"id":1
}
Response
{
"id": 1,
"jsonrpc": "2.0",
"result": {
"id": 191,
"create_date": "2020-10-06 04:44:13",
"is_automated": false,
"script": "",
"arguments": "",
"extra_link": null,
"summary": "jsonrpctest",
"requirement": null,
"notes": "",
"text": "20201005test",
"case_status_id": 2,
"case_status": "CONFIRMED",
"category_id": 2,
"category": "--default--",
"priority_id": 1,
"priority": "P1",
"author_id": 1,
"author": "ardyn",
"default_tester_id": null,
"default_tester": null,
"reviewer_id": null,
"reviewer": null,
"plan": [],
"component": [],
"tag": []
}
}
https://kiwitcms.readthedocs.io/en/latest/api/index.html says
"Server side RPC methods are documented in tcms.rpc.api."
Which is
https://kiwitcms.readthedocs.io/en/latest/modules/tcms.rpc.api.html
And there is the TestPlan.add_case() method:
https://kiwitcms.readthedocs.io/en/latest/modules/tcms.rpc.api.testplan.html#tcms.rpc.api.testplan.add_case
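So after the TestCase.create call above, a follow-up request along these lines should attach the new case to plan 3 (a sketch based on the documented add_case(plan_id, case_id) signature, reusing the IDs from the example above; untested):
{
"jsonrpc": "2.0",
"method": "TestPlan.add_case",
"params": [3, 191],
"id": 2
}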

Validate an array of objects in Postman

I am quite new to Postman and not a particularly good programmer. I am testing an API and trying to validate part of a response that contains an array looking like this:
"processors": [
{
"name": "ARTPEC-5",
"type": "SOC",
"url": null,
"releaseNotes": null,
"cdnUrl": null,
"cdnReleaseNotes": null
},
{
"name": "SSL",
"type": "SOC",
"url": null,
"releaseNotes": null,
"cdnUrl": null,
"cdnReleaseNotes": null
},
{
"name": "ARTPEC-7",
"type": "SOC",
"url": null,
"releaseNotes": null,
"cdnUrl": null,
"cdnReleaseNotes": null
}
]
Now I would like to validate that the array contains the above objects. They may come in any order, so I cannot refer to the objects by index like jsonData.processors[0] and validate them one by one. I need a general validation method. I have tried this, which did not work:
pm.test("Check if the response has processors", function () {
pm.expect(jsonData.processors).to.have.members([
{
"name": "ARTPEC-5",
"type": "SOC",
"url": null,
"releaseNotes": null,
"cdnUrl": null,
"cdnReleaseNotes": null
},
{
"name": "SSL",
"type": "SOC",
"url": null,
"releaseNotes": null,
"cdnUrl": null,
"cdnReleaseNotes": null
},
{
"name": "ARTPEC-7",
"type": "SOC",
"url": null,
"releaseNotes": null,
"cdnUrl": null,
"cdnReleaseNotes": null
}]);
});
This approach only gives me the cryptic error message AssertionError: expected [ Array(3) ] to have the same members as [ Array(3) ].
By using _.differenceWith() you will get an empty array if there is no difference between the array objects and their properties. You can then assert that as follows:
var _ = require('lodash')
var objects = [{ 'x': 1, 'y': 2 }, { 'x': 1, 'y': 2 }];
var arr = _.differenceWith(objects, [{ 'x': 1, 'y': 2 }], _.isEqual);
console.log(arr);
console.log(objects.length);
// assert that difference should be empty
pm.test("array difference", function () {
pm.expect([]).to.eql(arr)
});
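Applied to the processors array from the question, a sketch of the full test could look like this (assuming the response body is parsed with pm.response.json(); checking the difference in both directions also catches unexpected extra objects):
var _ = require('lodash');
var jsonData = pm.response.json();
// the objects we expect to find, in any order
var expectedProcessors = [
{ name: "ARTPEC-5", type: "SOC", url: null, releaseNotes: null, cdnUrl: null, cdnReleaseNotes: null },
{ name: "SSL", type: "SOC", url: null, releaseNotes: null, cdnUrl: null, cdnReleaseNotes: null },
{ name: "ARTPEC-7", type: "SOC", url: null, releaseNotes: null, cdnUrl: null, cdnReleaseNotes: null }
];
pm.test("Check if the response has the expected processors", function () {
// deep-compare while ignoring order: both differences must be empty
pm.expect(_.differenceWith(expectedProcessors, jsonData.processors, _.isEqual)).to.eql([]);
pm.expect(_.differenceWith(jsonData.processors, expectedProcessors, _.isEqual)).to.eql([]);
});
As an aside, the original assertion fails because to.have.members compares objects with strict equality; to.have.deep.members would be another way to make that comparison deep.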

c-lightning public node data on explorers

I'm trying to set up my first c-lightning node with docker-compose, using the image from https://hub.docker.com/r/elementsproject/lightningd. Currently my node can connect to and open channels with other nodes (and I can open a channel to the node just fine), but it still isn't showing up (i.e. has no information) on most explorers.
I've tried opening port 9735, setting bind-addr to the Docker container's IP address, and even setting announce-addr to a Tor address. Nothing works.
The following are the current results of getinfo and listconfigs:
getinfo
{
"id": "03db40337c2de299a8fa454fdf89d311615d50a27129d43286696d9e497b2b027a",
"alias": "TestName",
"color": "fff000",
"num_peers": 3,
"num_pending_channels": 0,
"num_active_channels": 3,
"num_inactive_channels": 0,
"address": [
{
"type": "ipv4",
"address": "68.183.195.14",
"port": 9735
}
],
"binding": [
{
"type": "ipv4",
"address": "172.18.0.3",
"port": 9735
}
],
"version": "v0.7.1-906-gf657146",
"blockheight": 601917,
"network": "bitcoin",
"msatoshi_fees_collected": 0,
"fees_collected_msat": "0msat"
}
listconfigs
{
"# version": "v0.7.1-906-gf657146",
"lightning-dir": "/root/.lightning",
"wallet": "sqlite3:///root/.lightning/lightningd.sqlite3",
"plugin": "/usr/local/bin/../libexec/c-lightning/plugins/pay",
"plugin": "/usr/local/bin/../libexec/c-lightning/plugins/autoclean",
"plugin": "/usr/local/bin/../libexec/c-lightning/plugins/fundchannel",
"network": "bitcoin",
"allow-deprecated-apis": true,
"always-use-proxy": false,
"daemon": "false",
"rpc-file": "lightning-rpc",
"rgb": "fff000",
"alias": "HubTest",
"bitcoin-rpcuser": [redacted],
"bitcoin-rpcpassword": [redacted],
"bitcoin-rpcconnect": "bitcoind",
"bitcoin-retry-timeout": 60,
"pid-file": "lightningd-bitcoin.pid",
"ignore-fee-limits": false,
"watchtime-blocks": 144,
"max-locktime-blocks": 2016,
"funding-confirms": 3,
"commit-fee-min": 200,
"commit-fee-max": 2000,
"commit-fee": 500,
"cltv-delta": 14,
"cltv-final": 10,
"commit-time": 10,
"fee-base": 0,
"rescan": 15,
"fee-per-satoshi": 1,
"max-concurrent-htlcs": 30,
"min-capacity-sat": 10000,
"bind-addr": "172.18.0.3:9735",
"announce-addr": "68.183.195.14:9735",
"offline": "false",
"autolisten": true,
"disable-dns": "false",
"enable-autotor-v2-mode": "false",
"encrypted-hsm": false,
"log-level": "DEBUG",
"log-prefix": "lightningd(7):"
}
Is there something wrong with this configuration? Or, is it another issue after all?
I understand that explorers update their node lists irregularly, and as long as the node can open channels (and can be connected to), everything is fine, but this has been bugging me for weeks.
Updating the Docker image and setting bind-addr to 0.0.0.0:9735 somehow fixed the problem, for reasons I don't fully understand.
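For anyone running into the same thing, the working combination boils down to something like this in the lightningd config (a sketch; the announce address is the public IP from getinfo above and will differ for your node):
# listen on all interfaces inside the container
bind-addr=0.0.0.0:9735
# advertise the publicly reachable address to the network
announce-addr=68.183.195.14:9735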

Elasticsearch mapping not updated after inserting new document via tire (mongoid4, rails4)

Recently I've encountered some strange behaviour regarding Elasticsearch with rails4/mongoid4/tire. I managed to apply a temporary fix, but I want to know whether there is a cleaner solution and where exactly the problem lies (is it an Elasticsearch issue?).
Relevant part of my Gemfile
gem 'rails', '4.0.0'
gem "mongoid", github: 'mongoid/mongoid'
gem 'tire'
Elasticsearch version:
"version" : {
"number" : "0.90.2",
"snapshot_build" : false,
"lucene_version" : "4.3.1"
}
My model:
The relevant part of my model consists of the Ad class:
class Ad
include Mongoid::Document
field :title, type: String
[... other stuff...]
end
and Ad subclasses, one of which is:
class AdInAutomotiveAutomobile < Ad
field :make
field :model
field :body_type
tire.index_name 'ads'
[... other stuff ...]
end
Using inheritance doesn't seem to matter, but I'm mentioning it just for the record.
The problem
Inserting a new Ad doesn't update the mapping of the 'ads' index:
{
"ads": {
"ad_in_automotive_automobile": {
"properties": {
"$oid": {
"type": "string"
}
}
}
}
}
Log output, trimmed down:
# 2013-08-02 15:40:58:387 [ad_in_automotive_automobile/51fbb6b26f87e9ab1d000001] ("ads")
#
curl -X POST "http://localhost:9200/ads/ad_in_automotive_automobile/51fbb6b26f87e9ab1d000001" -d '{
"_id": {
"$oid": "51fbb6b26f87e9ab1d000001"
},
"active": null,
"body_type": "hatchback",
"c_at": "2013-08-02T13:40:57.647Z",
"category_id": {
"$oid": "51e8020c6f87e9b8e0000001"
},
"color": null,
"description": null,
"engine_displacement": null,
"expire_at": null,
"fuel_type": null,
"locale": null,
"make": "ford",
"meta": {},
"mileage": null,
"model": "focus",
"power": null,
"price": null,
"title": "foo",
"transmission": null,
"u_at": "2013-08-02T13:40:57.647Z",
"year": null,
"category_slug": "automotive-automobile"
}'
# 2013-08-02 15:40:58:388 [201]
#
#
{
"ok": true,
"_index": "ads",
"_type": "ad_in_automotive_automobile",
"_id": "51fbb6b26f87e9ab1d000001",
"_version": 1
}
The solution
Somehow, this:
"_id":{"$oid":"51fbb6b26f87e9ab1d000001"}
is stopping Elasticsearch from updating the mapping.
So I've 'fixed' this in the #to_indexed_json method:
def to_indexed_json
to_json(methods: [:category_slug]).gsub( /\{\"\$oid\"\:(\".{24}\")\}/ ) { $1 }
end
Which results in:
# 2013-08-02 15:50:08:689 [ad_in_automotive_automobile/51fbb8fb6f87e9ab1d000002] ("ads")
#
curl -X POST "http://localhost:9200/ads/ad_in_automotive_automobile/51fbb8fb6f87e9ab1d000002" -d '{
"_id": "51fbb8fb6f87e9ab1d000002",
"active": null,
"body_type": "hatchback",
"c_at": "2013-08-02T13:50:08.593Z",
"category_id": "51e8020c6f87e9b8e0000001",
"color": null,
"description": null,
"engine_displacement": null,
"expire_at": null,
"fuel_type": null,
"locale": null,
"make": "ford",
"meta": {},
"mileage": null,
"model": "focus",
"power": null,
"price": null,
"title": "foo",
"transmission": null,
"u_at": "2013-08-02T13:50:08.593Z",
"year": null,
"category_slug": "automotive-automobile"
}'
# 2013-08-02 15:50:08:690 [201]
#
#
{
"ok": true,
"_index": "ads",
"_type": "ad_in_automotive_automobile",
"_id": "51fbb8fb6f87e9ab1d000002",
"_version": 1
}
And now the mapping is OK:
{
"ads": {
"ad_in_automotive_automobile": {
"properties": {
"$oid": {
"type": "string"
},
"body_type": {
"type": "string"
},
"c_at": {
"type": "date",
"format": "dateOptionalTime"
},
"category_id": {
"type": "string"
},
"category_slug": {
"type": "string"
},
"make": {
"type": "string"
},
"meta": {
"type": "object"
},
"model": {
"type": "string"
},
"title": {
"type": "string"
},
"u_at": {
"type": "date",
"format": "dateOptionalTime"
}
}
}
}
}
The question(s), once again
Why does it happen?
What part of the stack is responsible for that?
Can it be fixed in a cleaner way?
I'm the guy from the comment. It looks like this is fixed in tire HEAD; see this issue: https://github.com/karmi/tire/issues/775. I haven't verified the fix since I monkey-patched the class. This is the patch in case you want to go that way:
require "tire"
module Tire
class Index
def get_id_from_document(document)
case
when document.is_a?(Hash)
document[:_id] || document['_id'] || document[:id] || document['id']
when document.respond_to?(:id) && document.id != document.object_id
document.id.to_s # was document.id.as_json
end
end
end
end
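If you would rather avoid both the regex workaround and the monkey patch, another option (just a sketch, not verified against this exact mongoid/tire combination) is to stringify the ObjectIds up front in #to_indexed_json:
def to_indexed_json
# force the BSON ObjectIds to plain strings so the serialized JSON
# never contains {"$oid": "..."} wrappers in the first place
as_json(methods: [:category_slug])
.merge("_id" => id.to_s, "category_id" => category_id.to_s)
.to_json
end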
