I'm trying to redirect from my domain to my localhost. The issue is that I have a dynamic IP address, so it changes periodically.
Is there any app that saves my IP into my online MySQL database? (So I can then set up the redirect using PHP.)
If you know of any other solution, it will be welcome too! :)
Thanks!
PS: I've tried No-IP, but I don't want to pay to use my own domain.
If you are on Windows you can set a scheduled task to run on startup, but it would be better to make it run periodically, because your IP address can change even without a restart.
Make the scheduled task run a script; it can be PHP, Ruby, or Python, they all have MySQL adapters and can run without a web server. From a .bat script you can pass the IP address as an argument and the script sends it to MySQL (see the sketch below).
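For illustration only, a minimal PHP sketch of such a script might look like this; the IP lookup service, the home_ip table, and the credentials are all placeholders you would swap for your own:

<?php
// Hypothetical sketch: store this machine's current public IP in a remote MySQL table.
// The table name, columns, host, and credentials below are placeholders.
$ip = trim(file_get_contents('https://api.ipify.org')); // any "what is my IP" service works
$pdo = new PDO('mysql:host=your-mysql-host;dbname=yourdb', 'youruser', 'yourpassword');
$stmt = $pdo->prepare('UPDATE home_ip SET ip = :ip, updated_at = NOW() WHERE id = 1');
$stmt->execute([':ip' => $ip]);

Your PHP redirect page can then read that row and send a Location header to the stored IP. On Windows, a scheduled run could be set up with something along the lines of schtasks /Create /SC HOURLY /TN UpdateHomeIP /TR "php C:\scripts\update_ip.php" (task name and path are placeholders).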
If it were Linux, you could do the same with a bash script.
Even dynamic IPs can use DNS servers (dynamic DNS); you should look into that too.
I wonder if we can start or stop a website on IIS 8.5 running on a remote server that is in a different domain (I can provide the credentials in the batch file if needed).
I know that if it is a local site running on our machine, we can use a batch file like:
@echo off
appcmd start site "local.xyz.com"
pause
I appreciate your help on this.
Thanks in advance.
Several options; I'll just list two below.
If you can use PowerShell remoting, then you can easily run appcmd or equivalent commands on the remote machine (see the sketch below).
If you can deploy the IIS Administration API, you get even more possibilities.
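As a rough sketch of the remoting option (server name, credentials, and site name are placeholders), something like this could run appcmd on the remote box:

Invoke-Command -ComputerName remoteserver -Credential (Get-Credential) -ScriptBlock {
    & "$env:windir\system32\inetsrv\appcmd.exe" stop site "local.xyz.com"
}

You can call a script like this from a batch file via powershell.exe -File if the rest of your workflow stays batch-based. Note that remoting to a machine in a different, untrusted domain may additionally require WinRM TrustedHosts (or HTTPS) to be configured.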
So I'm trying to transfer files to a remote computer over SSH. I've used the sftp command, used lls to confirm the presence of the file on the local computer, and then ran the put filename command. However, I receive the same result each time:
stat filename: No such file or directory
I just don't know what's going wrong! Any help or troubleshooting tips would be appreciated.
If you're currently using Windows you can download WinSCP and use that to transfer files. It has a nice graphical interface that is easy to interact with.
Well, supposing that you are in a Linux/Unix environment, you could use scp. Typically, the syntax for an scp command would be like this:
$ scp foobar.txt your_username@remotehost.net:/some/remote/directory
The above command copies the file foobar.txt, which resides on the local computer, to a specific directory on the remote machine, using a username (you will be asked for a password).
The sftp command-line client uses the SSH transport and will tunnel your connections using your key. So if you have SSH access, you should also have SFTP access. This is a secure option for people who are more comfortable with FTP. Most GUI FTP clients should also support SFTP.
I was also facing this issue when trying to upload files from my local machine to the remote server. My commands were fine; the mistake I was making was that I had logged into the remote server with ssh and then started sftp from there. That way, sftp treats the remote server as the local side (since I had logged in to it first via ssh) when using the command below:
put /c/path/to/file.txt
So, the thing to do is to log in to the server directly via sftp from your own machine, and put your local files from there.
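For reference, a typical session started from the local machine might look like this (hostname and file name are placeholders):

$ sftp your_username@remotehost.net
sftp> lls
foobar.txt
sftp> put foobar.txt
sftp> exit

Here lls lists files on the local side and put uploads from local to remote, which only behaves as expected when the local side really is your own machine.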
I had everything set up and working for years: a script that did price updates on a customer's website via their local SAP database.
They just changed ISPs and thus their IP address has changed. I've made all the necessary changes to the outbound firewall and freetds.conf (the ODBC host config file) on our web server. However, when I run the script from the command line (it's never run via Apache, only via cron) it still attempts its connection to the OLD IP, which I have verified using tcpdump.
Nothing I have tried gets PHP to see the new "host ip" in the freetds.conf file.
Is there any way to get this to work without rebooting the server? Is there some way to use the freebcp tool to force a reload of the config file?
There has to be a way to do this without rebooting. This is Linux, after all.
It turns out that I was just editing the wrong config file. Everything worked immediately once I updated the correct file. No reboots or restarts were required.
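For anyone hitting the same thing: FreeTDS can read its configuration from more than one place (for example /etc/freetds/freetds.conf, /usr/local/etc/freetds.conf, or a per-user ~/.freetds.conf), and tsql -C prints which configuration directory the installed FreeTDS actually uses. The host lives in the per-server entry, roughly along these lines (the server name, address, and TDS version are placeholders):

[sapserver]
        host = 203.0.113.10
        port = 1433
        tds version = 7.4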
What is the simplest way to schedule a batch file to run on a remote machine using Hudson (latest and greatest version)? I was exploring the master/slave setup. I created a dumb slave, but I am not sure what the parameters should be so that I can trigger the batch file on the remote slave machine.
Basically, I am trying to run two different batch files on two different remote machines sequentially, triggered from my machine (the master). The step-by-step guide on the Hudson website is a dead link. There are similar questions posted on SO, but it does not quite work for me when I use the parameters they mention.
If anyone has done something similar please suggest ways to make this work.
(I know how to set up jobs and add a step to run a batch file, etc.; what I am having trouble configuring is doing this on a remote machine using Hudson's built-in features.)
UPDATE
Thank you all for the suggestions. Quick update on this:
What I wanted to get done is partially working; below are the steps I followed to get there:
Created a new node from Manage Nodes -> New Node -> set # of executors to 1, set Remote FS root to '/var/hudson', set the launch method to JNLP, set the slave name, and saved.
Once the slave was set up (from the master machine), I logged into the slave's physical machine, downloaded slave.jar from http://masterserver:port/jnlpJars/slave.jar, and ran the following from the command line at the download location: java -jar slave.jar -jnlpUrl http://masterserver:port/computer/slavename/slave-agent.jnlp. The connection was made successfully.
Checked 'Restrict where this project can be run' in the master job configuration, and set the parameter to the slave name.
Checked "Add Build Step" for adding my batch job script
What I am still missing now is a way to connect to two slaves from one job in sequence; is that possible?
It is fairly easy and straightforward. Let's assume you already have a slave running. Then you configure the job as if you were working locally on the target box. The setting for 'Restrict where this project can be run' needs to be the node that you want the job to run on. That is all for the job configuration.
For the slave configuration, read the following pages:
Installing Hudson as a Windows service
Distributed builds
On Windows I prefer to run the slave as a service and let the remote machine manage the startup and shutdown of the slave. The only disadvantage with this is that you need to upgrade the client every time you update the server. Just get the new client jar from the server after the upgrade and put it on the slave. Then restart the slave and you are done.
I had trouble using the 'install as a service' option for the slave even though I did it as a local administrator. I then used srvany to wrap the jar into a service. Here is a blog about it. The command that you need to wrap you will get from your Hudson server, on the slave's page. For all of this to work, you should set the slave launch method to JNLP.
If you have an SSH server on your target machine, you can use the SSH slave settings. These work for me like a charm; I use them with my Unix slaves. So far the SSH option with Unix is less of a hassle than the Windows service clients.
I had some similar trouble with slave setup and wrote up this blog post - I was running on Linux rather than Windows, but hopefully this will help.
I don't know how to use built-in Hudson features for this job, but in one of my project builds I run a batch file that in turn uses PsTools to run the job on a remote server. I found PsTools extremely easy to use: download, unpack, and run the command with the right parameters, hence I opted to use this.
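For illustration, the PsExec call inside such a batch file might look roughly like this (server name, account, password, and path are placeholders):

psexec \\remoteserver -u DOMAIN\someuser -p somepassword cmd /c C:\jobs\mybatch.bat

PsExec does not copy your files for you; the batch file and anything it needs must already exist on the remote machine (or on a share the remote account can reach).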
I own a website with 20 GB of data on it. Now I've decided to change the hosting company.
I'm moving to a Russian VPS, so is there a way to transfer the contents of my website to the Russian VPS without uploading them again?
Is there a service that does this?
I heard that there is a way to do this using shell access (but what is shell access and how does it work?).
Thanks in advance, guys.
You can log in to your old host using an SSH connection, then connect from there to your new host, again using SSH, and upload all files from the first server to the second. For databases, do a data dump on your first server and, through the SSH connection, run the dump against a database on your new server (see the sketch below).
Depending on the hosts, how you connect via SSH will differ, but there should be instructions available from the providers. If you can't find the directions, just e-mail the providers' support and ask.
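As a rough sketch of the above (hostnames, paths, usernames, and database names are placeholders), run from the old host:

$ rsync -avz /var/www/ user@new-host.example:/var/www/
$ mysqldump -u dbuser -p mydb > mydb.sql
$ scp mydb.sql user@new-host.example:/tmp/

and then, logged in to the new host:

$ mysql -u dbuser -p mydb < /tmp/mydb.sql

If rsync is not installed on the old host, scp -r works too; it just cannot resume an interrupted copy.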
If you have access to the server itself, you can FTP into your old site from the new server and download all the data directly onto the new server, without having to download it to a personal computer first.
If your current provider supports FTP, you can issue FTP commands from your new VPS to the current FTP site (see the example below). If your data is in a database, back it up and transfer the backup.
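For example, one hedged way to pull the whole site from the new VPS is wget's mirror mode (host, account, and path are placeholders):

$ wget -m ftp://ftpuser:ftppassword@old-host.example/public_html/

The -m option recurses through the directory tree, so the site lands directly on the VPS without passing through your own connection.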
You can't avoid 40 GB of transfer (20 out of the old site and 20 into the new one).
This is one of the reasons Amazon S3 is a good thing.