How to ping my website in Azure? - winforms

I wrote a Windows application that pings my website every 5 minutes to check whether it is UP or DOWN. It was working on our network and on our test server, but our live environment is in Azure, which doesn't allow me to ping the site.
What can I use instead of Ping to monitor my website in Azure, and how?

Use HttpWebRequest and HttpWebResponse, and remember to Dispose the objects you create, in particular the response stream of the HttpWebResponse.
var req = (HttpWebRequest)WebRequest.Create(url);  // 'url' is the address to check
using (HttpWebResponse res = (HttpWebResponse)req.GetResponse())
using (Stream respStream = res.GetResponseStream())
{
    respStream.ReadByte();
}
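Disposing the response matters more than it may look: on the .NET Framework, HttpWebRequest pools connections per host with a default limit of two (ServicePointManager.DefaultConnectionLimit), and an undisposed response keeps its pooled connection checked out, so later GetResponse() calls can block.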

I would suggest you take it outside of your own infrastructure, mainly because pinging something gives you very limited information anyway. What if the role is up and running but your website is actually crashing with an exception? What if you have multiple instances of your role, one of which is down, and your ping request doesn't report anything wrong?
Use something like Pingdom:
http://www.pingdom.com
It'll allow you to do a few things that are probably not possible (or not easy) with your own 'in-house' solution, such as transaction monitoring (similar to very basic UI tests: have a user log in and click around, for instance), multiple ways of alerting (even Twitter alerts), and multiple request locations (it may work from the UK, but does it work when accessed from France?).
Services like this were created for this sole purpose. You need to define when you believe your website to "be up" or "be down". Is it when it responds to pings? Is it when your login page displays OK? Is it when your admin page displays OK?

I used HttpWebRequest and HttpWebResponse instead of Ping, but after three requests the program freezes for some reason. I put everything in a try...catch block, but it does not throw any exception either; it only freezes.
try
{
    var myRequest = (HttpWebRequest)WebRequest.Create(url);
    var response = (HttpWebResponse)myRequest.GetResponse(); // After the third time it freezes here
    if (response.StatusCode == HttpStatusCode.OK)
    {
        labelResult.Text += TxtIPAddress.Text + " is Available " + " " + System.DateTime.Now.ToString() + " " + Environment.NewLine;
    }
    else
    {
        labelResult.Text += TxtIPAddress.Text + " is Unavailable " + System.DateTime.Now.ToString() + " " + Environment.NewLine;
    }
}
catch (Exception ex)
{
    labelResult.Text += TxtIPAddress.Text + " is Unavailable " + System.DateTime.Now.ToString() + " " + Environment.NewLine;
}
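The symptom is consistent with the disposal advice in the first answer: the HttpWebResponse is never disposed, so the pooled connections (two per host by default) are never returned, and the next GetResponse() call blocks until it times out. A minimal sketch of the fix, assuming undisposed responses are indeed the cause (url, labelResult, and TxtIPAddress as in the question):
try
{
    var myRequest = (HttpWebRequest)WebRequest.Create(url);
    using (var response = (HttpWebResponse)myRequest.GetResponse())
    {
        // Disposing the response returns its connection to the pool,
        // so repeated checks no longer exhaust the per-host limit.
        if (response.StatusCode == HttpStatusCode.OK)
        {
            labelResult.Text += TxtIPAddress.Text + " is Available " + DateTime.Now + Environment.NewLine;
        }
        else
        {
            labelResult.Text += TxtIPAddress.Text + " is Unavailable " + DateTime.Now + Environment.NewLine;
        }
    }
}
catch (Exception)
{
    // GetResponse() also throws WebException for non-2xx status codes and timeouts
    labelResult.Text += TxtIPAddress.Text + " is Unavailable " + DateTime.Now + Environment.NewLine;
}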

Another monitoring tool, which is more complex, is PRTG.
It provides a number of sensor types and is free up to a certain number of sensors.
That way you can monitor not just a site's existence, but also whether a web service returns for a specific call, or whether a SQL query succeeds. The possibilities are almost endless.

Related

Leveraging Apache Solr streaming capability to send millions of records as part of a REST API

My problem statement goes like this:
"I want to leverage Apache Solr 8.6.1's streaming capability to send millions of records as part of a Spring Boot REST API call. I cannot call the Solr endpoints directly, due to security restrictions and some business logic that has to run, so I have written code that reads the data as a stream and pushes it to the Spring Boot output stream."
Every time I make the API call, it goes through the following code:
StreamFactory factory = new StreamFactory().withCollectionZkHost(COLLECTION_NAME, ZK_HOST);
SolrClientCache solrClientCache = new SolrClientCache(httpClient);
StreamContext streamContext = new StreamContext();
streamContext.setSolrClientCache(solrClientCache);
String expressionStr = String.format(SEARCH_EXPRESSION, COLLECTION_NAME);
StreamExpression expression = StreamExpressionParser.parse(expressionStr);
TupleStream stream = null;
try {
    stream = new CloudSolrStream(expression, factory);
    stream.setStreamContext(streamContext);
    stream.open(); // connects to ZooKeeper here
    Tuple tuple = stream.read();
    int count = 0;
    while (!tuple.EOF) {
        String jsonStr = ++count + " " + tuple.jsonStr() + "\r\n";
        outputStream.write(jsonStr.getBytes());
        outputStream.flush();
        tuple = stream.read();
    }
} catch (IOException e) {
    e.printStackTrace();
} finally {
    if (stream != null) {
        try {
            stream.close(); // close even if reading fails
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
At stream.open() it connects to ZooKeeper, and that is taking some time.
Is it possible to optimize this code so that it doesn't have to connect to ZooKeeper on every call, and we can have the connection ready beforehand?
Because it is a stream, we have to open and close the stream with every call.
Also, how will it behave in a multi-user scenario?
Can anyone throw some light on this and on how we can optimize it further?
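For what it's worth, SolrClientCache caches the underlying CloudSolrClient (and its ZooKeeper connection) per zkHost, so one plausible optimization, offered here as a suggestion rather than a tested fix, is to create a single SolrClientCache at application startup and share it across requests, building only the StreamContext and CloudSolrStream per call. Each request still opens and closes its own stream, but the cached client avoids reconnecting to ZooKeeper on every call, which also speaks to the multi-user question, since one cache can serve many concurrent streams.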

Correlation Failed, Remote Login. AspNet Core Identity Server

Trying to gain some basic understanding of how this process works, as I am receiving the Correlation failed error. Let me first begin by describing the issue I'm encountering...
QAT is not working properly and is configured as follows:
I have an Identity Server running behind a load balancer for QAT.
All requests sent to the load balancer are https.
The traffic being forwarded to each application server (2 separate servers in this case) is http.
The NetScaler is adding all necessary X-Forwarded items to the header.
I have another application that also sits behind the load balancer for QAT.
There are 2 separate servers hosting this application, to which the NetScaler forwards the traffic.
This application is configured to use the X-Forwarded info from the NetScaler.
It is designed to authenticate using the above-mentioned Identity Server.
My issue is that I end up with a never-ending loop between the second application and the Identity Server when I deploy to QAT. This is strange to me, as my SYS environment works perfectly. My SYS environment has a separate instance of Identity Server and of the second application mentioned (except that there is only a single instance of each application being forwarded to). This also goes through the NetScaler and does all the X-Forwarded magic mentioned earlier.
In both situations the setup is identical. The only difference is that QAT has multiple servers hosting each app and SYS only has 1 server hosting each app.
My question is why would this behave differently?
Why would this work in SYS but not in QAT?
I think at this point we can rule out the callback path, cookie settings, etc., because it works in SYS.
Could it be that I need to implement some sort of Data Protection Key middleware in both the Identity Server and the other application? On that note, I really don't understand the Data Protection Keys. Would both the Identity Server and the separate application need to store their keys in the same location (whether that be in the database or the filesystem) in order to be able to decrypt the information stored in the cookie?
Any help is greatly appreciated.
It was definitely the Data Protection Keys that were the issue. My solution was simple: save the encryption key as part of the deployment process, create an IXmlRepository, and then add that to the startup. Easy peasy.
using Microsoft.AspNetCore.DataProtection.Repositories;
using System;
using System.Collections.Generic;
using System.Text;
using System.Xml.Linq;

namespace Myapp.Encryption.Repositories
{
    public class EncryptionRepository : IXmlRepository
    {
        private String Key { get; set; }

        public EncryptionRepository()
        {
            var year = Convert.ToString(DateTime.Now.Year + 2);
            var key = "<key id=\"983440f7-626b-46e4-8bfa-7c3d6d9d4619\" version=\"1\">" +
                      "  <creationDate>2019-11-13T17:42:58.889085Z</creationDate>" +
                      "  <activationDate>2019-11-13T17:42:58.3843715Z</activationDate>" +
                      "  <expirationDate>" + year + "-02-11T17:42:58.3843715Z</expirationDate>" +
                      "  <descriptor deserializerType=\"Microsoft.AspNetCore.DataProtection.AuthenticatedEncryption.ConfigurationModel.AuthenticatedEncryptorDescriptorDeserializer, Microsoft.AspNetCore.DataProtection, Version=2.2.0.0, Culture=neutral, PublicKeyToken=adb9793829ddae60\">" +
                      "    <descriptor>" +
                      "      <encryption algorithm=\"AES_256_CBC\" />" +
                      "      <validation algorithm=\"HMACSHA256\" />" +
                      "      <masterKey p4:requiresEncryption=\"true\" xmlns:p4=\"http://schemas.asp.net/2015/03/dataProtection\">" +
                      "        <value>{{Your Encryption Key }}</value>" +
                      "      </masterKey>" +
                      "    </descriptor>" +
                      "  </descriptor>" +
                      "</key>";
            Key = key;
        }

        public IReadOnlyCollection<XElement> GetAllElements()
        {
            var collection = new List<XElement>();
            collection.Add(XElement.Parse(Key));
            return collection;
        }

        public void StoreElement(XElement element, String friendlyName)
        {
            // Not required, as the key is hard coded
        }
    }
}
services.AddSingleton<IXmlRepository, EncryptionRepository>();
services.AddDataProtection().AddKeyManagementOptions(a => a.XmlRepository = (services.BuildServiceProvider()).GetService<IXmlRepository>());
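For reference, the same registration can be written without building an intermediate service provider (a sketch, assuming the Configure-with-dependency overload available in Microsoft.Extensions.Options 2.1+; KeyManagementOptions lives in Microsoft.AspNetCore.DataProtection.KeyManagement):
services.AddSingleton<IXmlRepository, EncryptionRepository>();
services.AddOptions<KeyManagementOptions>()
        .Configure<IXmlRepository>((options, repo) => options.XmlRepository = repo);
services.AddDataProtection();
Either way, the point is that every application instance behind the load balancer must read the same key ring; otherwise each instance issues cookies the others cannot decrypt, which is exactly the correlation failure described above.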

Should I Be Using Async Calls?

I have a C# application which reads a table of roughly 1,500 site URLs of clients who have been with the company since we started. Basically, I am running whois queries on these URLs and seeing if they are still a client or not. The application works, but it takes roughly an hour to complete. Would I be better off using async whois queries, and roughly how much time could I save?
Here is a sample whois query block of code that I am using.
Also, if anyone has any tips on how to improve this code or run async commands, could you please help me out, as I'm only an intern. Thanks.
string whoisServer = "whois.markmonitor.com";
string data;
try
{
    using (TcpClient objTCPC = new TcpClient(whoisServer, 43))
    {
        // The whois protocol: send the domain name terminated by CRLF,
        // then read the server's whole response.
        byte[] arrDomain = Encoding.ASCII.GetBytes(domainName + "\r\n");
        Stream objStream = objTCPC.GetStream();
        objStream.Write(arrDomain, 0, arrDomain.Length);
        using (StreamReader reader = new StreamReader(objStream, Encoding.ASCII))
        {
            data = reader.ReadToEnd();
        }
    }
}
catch
{
    data = "Not Found";
}
return data;
Well, the short answer is certainly yes.
Since you are making multiple, completely independent lookups, you have everything to gain by running them in parallel, asynchronously.
There are several ways to do this. The options depend on which version of .NET you're using.
As you would guess, there are many examples.
Check these out right here on SO:
Available parallel technologies in .Net
Multi threaded file processing with .NET
When to use a Parallel.ForEach loop instead of a regular foreach?
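As a rough illustration of the idea (a sketch only; LookupAsync and domains are illustrative names, not from the original post), the query block from the question can be made awaitable and then fanned out with Task.WhenAll:
using System;
using System.IO;
using System.Linq;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

static async Task<string> LookupAsync(string domainName)
{
    try
    {
        using (var client = new TcpClient())
        {
            await client.ConnectAsync("whois.markmonitor.com", 43);
            NetworkStream stream = client.GetStream();

            // Send the domain name terminated by CRLF, then read the whole response
            byte[] query = Encoding.ASCII.GetBytes(domainName + "\r\n");
            await stream.WriteAsync(query, 0, query.Length);
            using (var reader = new StreamReader(stream, Encoding.ASCII))
            {
                return await reader.ReadToEndAsync();
            }
        }
    }
    catch
    {
        return "Not Found";
    }
}

// Usage: start all lookups and await them together. With ~1500 domains you may
// want to throttle (e.g. with SemaphoreSlim) instead of opening 1500 sockets at once:
// string[] results = await Task.WhenAll(domains.Select(LookupAsync));
Since the hour is almost entirely network wait rather than CPU work, the wall-clock time then tends toward that of the slowest batch of lookups instead of the sum of all of them.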

Check MySql Connection is Opened Or Not in Visual C++

Sorry if this is boring. I have searched on several search engines but could not get any result. Anyway, I am working on an app whose database is MySQL. Now I have created a database wrapper class and want to check whether the connection is already open. Could you help me?
String^ constring = L"datasource=localhost;port=3306;username=root;password=pass;database=eps;";
String^ my_query = L"select id from eps_users where usr = '" + this->user_name->Text + "' and psw = md5('" + this->pass_word->Text + "');";
MySqlConnection^ conDatabase = gcnew MySqlConnection(constring);
MySqlCommand^ cmd = gcnew MySqlCommand(my_query, conDatabase);
MySqlDataReader^ myreader;
try
{
    conDatabase->Open();
    myreader = cmd->ExecuteReader();
    int count = 0;
    while (myreader->Read())
    {
        count = count + 1;
    }
    if (count == 1)
    {
        MessageBox::Show("Username And Password is correct.", "Success", MessageBoxButtons::OK,
                         MessageBoxIcon::Information);
        this->Hide();
        Form2^ f2 = gcnew Form2(constring);
        f2->ShowDialog();
    }
    else
    {
        MessageBox::Show("Username And Password is not correct.", "Error", MessageBoxButtons::OK,
                         MessageBoxIcon::Error);
        // <del>
        this->Hide();
        Form2^ f2 = gcnew Form2(constring);
        f2->ShowDialog();
        // </del>
    }
}
catch (Exception^ ex)
{
    MessageBox::Show(ex->Message);
}
conDatabase->Close();
I need something like: if (conDatabase->HasBeenOpened()) { conDatabase->Open(); }
The MySqlConnection type implements a feature called connection pooling, which relies on the garbage collector to help recycle connections to your database. The best practice with regard to connection objects is therefore to create a brand-new object for most calls to the database, so that the garbage collector can correctly recycle the old ones. The process goes like this:
Create a new connection
Open the connection
Use the connection for one query/transaction
Dispose the connection
All four steps live within a single try/catch/finally block, and the dispose step needs to happen inside the finally block! Because you generally start with a brand-new connection object, there's typically no need to check whether it's open first: you know it's closed. You also don't need to check the state after calling Open(): the method blocks until it's finished and throws an exception if it fails.
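In C# syntax (the question is C++/CLI, but the pattern is identical), a using statement provides the try/finally for free. This is only a sketch: connectionString and userName stand in for the question's values:
using (var con = new MySqlConnection(connectionString))
using (var cmd = new MySqlCommand("select id from eps_users where usr = @userID;", con))
{
    cmd.Parameters.AddWithValue("@userID", userName);
    con.Open();  // blocks until open, or throws
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // ... use the row ...
        }
    }
}   // Dispose() runs even on exceptions and returns the connection to the pool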
However, if you really are in one of the (rare!) situations where it's a good idea to preserve the connection for an extended period, you can check the state like this:
if( conDatabase->State == ConnectionState::Open)
Now, there is one other issue in that code I'd like to talk about. The issue comes down to this: what do you think will happen if I put the following into your username text box:
';DROP Table eps_users;--
If you think that it will try to execute that DROP statement in your database, you're right: it will! More subtle and damaging queries are possible as well. This is a huge issue: there are bots that run full time crawling web sites looking for ways to abuse this, and even corporate internal desktop apps get caught from time to time. To fix this, you need to use parameterized queries for every instance where user-provided data is included as part of your SQL statement.
A quick example might look like this:
String^ my_query = L"select id from eps_users where usr = @userID;";
MySqlCommand^ cmd = gcnew MySqlCommand(my_query, conDatabase);
cmd->Parameters->AddWithValue(L"@userID", this->user_name->Text);
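With the input bound as a parameter, the DROP attempt above reaches the server as a plain string value rather than as executable SQL.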

Provide a database packaged with the .APK file or host it separately on a website?

Here is some background about my app:
I am developing an Android app that will display a random quote or verse to the user. For this I am using an SQLite database. The size of the DB would be approximately 5K to 10K records, possibly increasing to up to 1M in later versions as new quotes and verses are added. Thus the user would need to update the DB as and when newer versions of the app or DB are released.
After reading through some forums online, there seem to be two feasible ways I could provide the DB:
1. Bundle it along with the .APK file of the app, or
2. Upload it to my app's website from where users will have to download it
I want to know which method would be better (if there is yet another approach other than these, please do let me know).
After pondering this problem for some time, I have these thoughts regarding the above approaches:
Approach 1:
Users will obtain the DB along with the app and won't have to download it separately. Installation would thereby be easier. But users will have to reinstall the app every time there is a new version of the DB. Also, if the DB is large, it will make the installer too cumbersome.
Approach 2:
Users will have to download the full DB from the website (although I can provide a small, sample version of the DB via Approach 1). But, the installer will be simpler and smaller in size. Also, I would be able to provide future versions of the DB easily for those who might not want newer versions of the app.
Could you please tell me from a technical and an administrative standpoint which approach would be the better one and why?
If there is a third or fourth approach better than either of these, please let me know.
Thank you!
Andruid
I built a similar app for Android which gets periodic updates with data from a government agency. It's fairly easy to build an Android-compatible DB off the device using Perl or similar and download it to the phone from a website; this works rather well, plus the user gets current data whenever they download the app. It's also supposed to be possible to put the data on the SD card if you want to avoid using primary data storage space, which is a bigger concern for my app, which has a ~6MB database.
In order to make Android happy with the DB, I believe you have to do the following (I build my DB using Perl):
$st = $db->prepare( "CREATE TABLE \"android_metadata\" (\"locale\" TEXT DEFAULT 'en_US')");
$st->execute();
$st = $db->prepare( "INSERT INTO \"android_metadata\" VALUES ('en_US')");
$st->execute();
I have an update activity which checks whether updates are available and, if so, presents an "update now" screen. The download process looks like this and lives in a DatabaseHelper class:
public void downloadUpdate(final Handler handler, final UpdateActivity updateActivity) {
    URL url;
    try {
        close();
        File f = new File(getDatabasePath());
        if (f.exists()) {
            f.delete();
        }
        getReadableDatabase();
        close();
        url = new URL("http://yourserver.com/" + currentDbVersion + ".sqlite");
        URLConnection urlconn = url.openConnection();
        final int contentLength = urlconn.getContentLength();
        Log.i(TAG, String.format("Download size %d", contentLength));
        handler.post(new Runnable() {
            public void run() {
                updateActivity.setProgressMax(contentLength);
            }
        });
        InputStream is = urlconn.getInputStream();
        // Open the empty db as the output stream
        OutputStream os = new FileOutputStream(f);
        // Transfer bytes from the input stream to the output file
        byte[] buffer = new byte[1024 * 1000];
        int written = 0;
        int length = 0;
        while (written < contentLength) {
            length = is.read(buffer);
            if (length == -1) {
                break; // stream ended early; avoid writing with a negative length
            }
            os.write(buffer, 0, length);
            written += length;
            final int currentprogress = written;
            handler.post(new Runnable() {
                public void run() {
                    Log.i(TAG, String.format("progress %d", currentprogress));
                    updateActivity.setCurrentProgress(currentprogress);
                }
            });
        }
        // Close the streams
        os.flush();
        os.close();
        is.close();
        Log.i(TAG, "Download complete");
        openDatabase();
    } catch (Exception e) {
        Log.e(TAG, "bad things", e);
    }
    handler.post(new Runnable() {
        public void run() {
            updateActivity.refreshState(true);
        }
    });
}
Also note that I keep a version number in the filename of the db files, and a pointer to the current one in a text file on the server.
It sounds like your app and your db are tightly bound -- that is, the app is useless without the database and the database is useless without the app -- so I'd say go ahead and put them both in the same .apk.
That being said, if you expect the db to change very slowly over time, but the app to change more quickly, and you don't want your users to have to download the db with each new app revision, then you might want to unbundle them. To make this work, you can do one of two things:
Install them as separate applications, but make sure they share the same userID using the sharedUserId tag in the AndroidManifest.xml file.
Install them as separate applications, and create a ContentProvider for the database. This way other apps could make use of your database as well (if that is useful).
If you are going to store the db on your website, then I would recommend that you just make RPC calls to your web server and get the data that way, so the device never has to deal with a local database. Using a cache manager to avoid repeated lookups will help as well, so pages will not have to look up data each time a page reloads. Also, if you need to update the data, you do not have to send out a new app every time. Using HttpClient is pretty straightforward; if you need any examples, please let me know.
