Only one usage of each socket address (protocol/network address/port) is normally permitted - azure-cognitive-search

For the last few weeks we have been experiencing this error message while using the Azure Search SDK (1.1.1 - 1.1.2) to perform searches.
We consume the Search SDK from internal APIs (deployed as Azure Web Apps) that scale up and down based on traffic, so there can be more than one instance of the APIs doing the searches.
Our API queries 5 different indexes and maintains an in-memory copy of the SearchIndexClient object that corresponds to each index; a very simple implementation would look like:
public class AzureSearchService
{
private readonly SearchServiceClient _serviceClient;
private Dictionary<string, SearchIndexClient> _clientDictionary;
public AzureSearchService()
{
_serviceClient = new SearchServiceClient("myservicename", new SearchCredentials("myservicekey"));
_clientDictionary = new Dictionary<string, SearchIndexClient>();
}
public SearchIndexClient GetClient(string indexName)
{
try
{
if (!_clientDictionary.ContainsKey(indexName))
{
_clientDictionary.Add(indexName, _serviceClient.Indexes.GetClient(indexName));
}
return _clientDictionary[indexName];
}
catch
{
return null;
}
}
public async Task<SearchResults> SearchIndex(SearchIndexClient client, string text)
{
var parameters = new SearchParameters();
parameters.Top = 10;
parameters.IncludeTotalResultCount = true;
var response = await client.Documents.SearchWithHttpMessagesAsync(text, parameters, null, null);
return response.Body;
}
}
And the API would invoke the service by:
public class SearchController : ApiController
{
private readonly AzureSearchService service;
public SearchController()
{
service = new AzureSearchService();
}
public async Task<HttpResponseMessage> Post(string indexName, [FromBody] string text)
{
var indexClient = service.GetClient(indexName);
var results = await service.SearchIndex(indexClient, text);
return Request.CreateResponse(HttpStatusCode.OK, results, Configuration.Formatters.JsonFormatter);
}
}
We are using SearchWithHttpMessagesAsync rather than SearchAsync because we need access to custom HTTP response headers.
This way we avoid opening/closing the client under traffic bursts. Before using this in-memory cache (when we wrapped each client in a using block) we would get port exhaustion alerts on Azure App Services.
Is this a good pattern? Could we be receiving this error because of the multiple instances running in parallel?
In case it is needed, the stack trace shows:
System.Net.Http.HttpRequestException: Only one usage of each socket address (protocol/network address/port) is normally permitted service.ip.address.hidden:443
[SocketException: Only one usage of each socket address (protocol/network address/port) is normally permitted service.ip.address.hidden:443]
at System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)
at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Exception& exception)
[WebException: Unable to connect to the remote server]
at System.Net.HttpWebRequest.EndGetRequestStream(IAsyncResult asyncResult, TransportContext& context)
at System.Net.Http.HttpClientHandler.GetRequestStreamCallback(IAsyncResult ar)
EDIT: We are also receiving this "A connection attempt failed because the connected party did not properly respond after a period of time" error:
System.Net.Http.HttpRequestException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond service.ip.address.hidden:443
[SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond service.ip.address.hidden:443]
at System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)
at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Exception& exception)
[WebException: Unable to connect to the remote server]
at System.Net.HttpWebRequest.EndGetRequestStream(IAsyncResult asyncResult, TransportContext& context)
at System.Net.Http.HttpClientHandler.GetRequestStreamCallback(IAsyncResult ar)

As implemented in the code in your question, the cache will not prevent port exhaustion. This is because you're instantiating the AzureSearchService as a field of the ApiController, and a new controller (and therefore a new SearchServiceClient) is created for every request. If you want to avoid port exhaustion, the cache must be shared across all requests, for example as a static instance or as a singleton registered in your DI container. To make it concurrency-safe, you should use something like ConcurrentDictionary instead of Dictionary.
The "connection attempt failed" error is likely unrelated.

Related

SignalR connection 400 error (Bad Request)

I'm trying to build a SignalR proof of concept where two applications are involved: one is a single-page web application and the other is a server-side RESTful Web API. The technologies being used are ReactJS, ASP.NET Web API 2 (.NET Framework 4.6, NOT .NET Core) and SignalR.
The Web API
This is how I have SignalR wired-up in the server application. When the application starts, I map SignalR to the application pipeline...
public static void ConfigureSignarlR(IAppBuilder app)
{
app.MapSignalR<ChatConnection>("/signalr", new Microsoft.AspNet.SignalR.HubConfiguration
{
EnableDetailedErrors = true
});
}
The ChatConnection class is an implementation of PersistentConnection that does nothing special...
public class ChatConnection : PersistentConnection
{
protected override Task OnReceived(IRequest request, string connectionId, string data)
{
return base.OnReceived(request, connectionId, data);
}
protected override Task OnConnected(IRequest request, string connectionId)
{
return base.OnConnected(request, connectionId);
}
public override Task ProcessRequest(HostContext context)
{
return base.ProcessRequest(context);
}
}
and then I have a very simple hub...
public class ChatHub : Hub
{
public void Send(string name, string message)
{
Clients.All.broadcastMessage(name, message);
}
}
The Client App
For the client application I'm using the @aspnet/signalr-client npm package... this is how I create and start the connection...
initialize = () => {
const hubCon = new HubConnection("http://api.domain/signalr");
hubCon.start()
.then(() => console.log("Connection established..."))
.catch(err => console.log(err))
}
Things to be noticed
Both the API and the client app are hosted on the same local IIS server but with different host names (using host files)
When using the browser to navigate to http://api.domain/signalr/hubs, I get a 400 (Bad Request) response with the following message: Protocol error: Unknown transport.
When attempting to connect from the client app, I get the same error message
The ProcessRequest method is the only one that gets hit when debugging the ChatConnection class
Question(s)
What did I miss here? Or how can I get this PoC to work?
The question is quite broad because I seriously have no clue of what's going on here
After a bit of digging and reading through SignalR documentation I realized that I was doing everything wrong. Basically, SignalR implements two different connection patterns:
Hubs: a high-level API built on top of the Persistent connection API
Persistent connections
A client cannot communicate with a persistent connection endpoint using a Hub proxy (or at least not the way I was doing it). So, what I did was:
Kept the PersistentConnection but overrode the OnReceived method so it broadcasts to all clients
protected override async Task OnReceived(IRequest request, string connectionId, string data)
{
await Connection.Broadcast("message to broadcast");
}
Removed the "signalr\hubs" script reference because it's not needed
Registered the connection on start up (server-side; a minimal Startup sketch follows after these steps)
app.MapSignalR<ChatConnection>("/chat");
Finally, on the client side, initialized the connection and registered all necessary callbacks
this.connection = window.$.connection(process.env.REACT_APP_API_BASE_URI + "/chat");
this.connection.logging = true;
this.connection.received((data) => {
console.log("Received some data:")
console.log(data)
});
this.connection.start(() => {
console.log("Connection opened")
console.log("connectionId = " + this.connection.id)
});
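For reference, the server-side registration above lives in an OWIN Startup class; a minimal sketch (namespace and class names are placeholders, and ChatConnection is assumed to be in scope) looks like this:

using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof(MyApp.Startup))]

namespace MyApp
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Map the PersistentConnection endpoint instead of a hub route.
            app.MapSignalR<ChatConnection>("/chat");
        }
    }
}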

Pass byte array from WPF to WebApi

tl;dr What is the best way to pass binary data (up to 1MBish) from a WPF application to a WebAPI service method?
I'm currently trying to pass binary data from a WPF application to a WebAPI web service, with variable results. Small files (< 100k) generally work fine, but any larger and the odds of success reduce.
A standard OpenFileDialog and then File.ReadAllBytes pass the byte[] parameter into the client method in WPF. This always succeeds, and I then post the data to WebAPI via a PostAsync call with a ByteArrayContent parameter.
Is this the correct way to do this? I started off with a PostJSONAsync call and passed the byte[] into that, but thought ByteArrayContent seemed more appropriate; however, neither works reliably.
Client Method in WPF
public static async Task<bool> UploadFirmwareMCU(int productTestId, byte[] mcuFirmware)
{
string url = string.Format("productTest/{0}/mcuFirmware", productTestId);
ByteArrayContent bytesContent = new ByteArrayContent(mcuFirmware);
HttpResponseMessage response = await GetClient().PostAsync(url, bytesContent);
....
}
WebAPI Method
[HttpPost]
[Route("api/productTest/{productTestId}/mcuFirmware")]
public async Task<bool> UploadMcuFirmware(int productTestId)
{
bool result = false;
try
{
Byte[] mcuFirmwareBytes = await Request.Content.ReadAsByteArrayAsync();
....
}
Web Config Settings
AFAIK these limits in web.config (maxAllowedContentLength is in bytes, maxRequestLength in kilobytes) should be sufficient to allow 1MB files through to the service:
<security>
<requestFiltering>
<requestLimits maxAllowedContentLength="1073741824" />
</requestFiltering>
</security>
<httpRuntime targetFramework="4.5" maxRequestLength="2097152"/>
I receive errors in WebAPI when calling ReadAsByteArrayAsync(). These vary, possibly due to the app pool in IIS Express having crashed / getting into a bad state, but they include the following (none of which has led to any promising leads via Google):
Specified argument was out of the range of valid values. Parameter name: offset
at System.Web.HttpInputStream.Seek(Int64 offset, SeekOrigin origin)\r\n
at System.Web.HttpInputStream.set_Position(Int64 value)\r\n at System.Web.Http.WebHost.SeekableBufferedRequestStream.SwapToSeekableStream()\r\n at System.Web.Http.WebHost.Seek
OR
Message = "An error occurred while communicating with the remote host. The error code is 0x800703E5."
InnerException = {"Overlapped I/O operation is in progress. (Exception from HRESULT: 0x800703E5)"}
at System.Web.Hosting.IIS7WorkerRequest.RaiseCommunicationError(Int32 result, Boolean throwOnDisconnect)\r\n
at System.Web.Hosting.IIS7WorkerRequest.ReadEntityCoreSync(Byte[] buffer, Int32 offset, Int32 size)\r\n
at System.Web.Hosting.IIS7WorkerRequ...
Initially I thought this was most likely down to IIS Express limitations (running on Windows 7 on my dev pc) but we've had the same issues on a staging server running Server 2012.
Any advice on how I might get this working, or even just a basic example of uploading files to WebAPI from WPF, would be great, as most of the code I've found out there relates to uploading files from multipart web forms.
Many thanks in advance for any help.
tl;dr It was a separate part of our code in the WebApi service that was causing it to go wrong, duh!
Ah, well, this is embarrassing.
It turns out our problem was down to a Request Logger class we'd registered in WebApiConfig.Register(HttpConfiguration config), and that I'd forgotten about.
It was reading the request content asynchronously as StringContent and then attempting to log it to the database in an nvarchar(max) field. This itself is probably OK, but I'm guessing all the weird problems started occurring because the LoggingHandler and the main WebApi controller were both trying to read the request content asynchronously.
Removing the LoggingHandler fixed the problem immediately, and we're now able to upload files of up to 100MB without any problems. To fix it more permanently, I guess a rewrite of the LoggingHandler is required so that it sets a limit on the maximum content size it tries to log and/or ignores certain content types (a sketch of such a guard follows after the handler code below).
It's doubtful, but I hope this may be of use for someone one day!
public class LoggingHandler : DelegatingHandler
{
protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
{
LogRequest(request);
return base.SendAsync(request, cancellationToken).ContinueWith(task =>
{
var response = task.Result;
// ToDo: Decide if/when we need to log responses
// LogResponse(response);
return response;
}, cancellationToken);
}
private void LogRequest(HttpRequestMessage request)
{
(request.Content ?? new StringContent("")).ReadAsStringAsync().ContinueWith(x =>
{
try
{
var callerId = CallerId(request);
var callerName = CallerName(request);
// Log request
LogEntry logEntry = new LogEntry
{
TimeStamp = DateTime.Now,
HttpVerb = request.Method.ToString(),
Uri = request.RequestUri.ToString(),
CorrelationId = request.GetCorrelationId(),
CallerId = callerId,
CallerName = callerName,
Controller = ControllerName(request),
Header = request.Headers.ToString(),
Body = x.Result
};
...........
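A sketch of the guard mentioned above, written as a helper that could slot into the LoggingHandler; the 64 KB threshold and the media-type whitelist are arbitrary examples, not values from our code:

// Only read and log the body when it is small and textual; otherwise LogRequest
// could record just the headers and the Content-Length.
private static bool ShouldLogBody(HttpRequestMessage request)
{
    if (request.Content == null)
        return false;

    long? length = request.Content.Headers.ContentLength;
    var contentType = request.Content.Headers.ContentType;
    string mediaType = contentType == null ? null : contentType.MediaType;

    const long maxLoggedBodyBytes = 64 * 1024; // arbitrary example limit

    return length.HasValue
        && length.Value <= maxLoggedBodyBytes
        && mediaType != null
        && (mediaType.StartsWith("text/")
            || mediaType.Equals("application/json", StringComparison.OrdinalIgnoreCase)
            || mediaType.Equals("application/xml", StringComparison.OrdinalIgnoreCase));
}

LogRequest would then check ShouldLogBody(request) before calling ReadAsStringAsync, and otherwise log only the headers.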

(Android Studio) Connecting an app to Google Endpoints Module

I'm having trouble following the second step here.
I really don't understand how this sample does anything other than return a simple toast message. How does it utilize the API to display that message?
class EndpointsAsyncTask extends AsyncTask<Pair<Context, String>, Void, String> {
private static MyApi myApiService = null;
private Context context;
@Override
protected String doInBackground(Pair<Context, String>... params) {
if(myApiService == null) { // Only do this once
MyApi.Builder builder = new MyApi.Builder(AndroidHttp.newCompatibleTransport(),
new AndroidJsonFactory(), null)
// options for running against local devappserver
// - 10.0.2.2 is localhost's IP address in Android emulator
// - turn off compression when running against local devappserver
.setRootUrl("http://10.0.2.2:8080/_ah/api/")
.setGoogleClientRequestInitializer(new GoogleClientRequestInitializer() {
@Override
public void initialize(AbstractGoogleClientRequest<?> abstractGoogleClientRequest) throws IOException {
abstractGoogleClientRequest.setDisableGZipContent(true);
}
});
// end options for devappserver
myApiService = builder.build();
}
context = params[0].first;
String name = params[0].second;
try {
return myApiService.sayHi(name).execute().getData();
} catch (IOException e) {
return e.getMessage();
}
}
@Override
protected void onPostExecute(String result) {
Toast.makeText(context, result, Toast.LENGTH_LONG).show();
}
I'm afraid this sample is too complex for my limited knowledge. How exactly do I "talk" to the Google Endpoints module when running an app? Specifically, what is EndpointsAsyncTask()?
Are there any resources listing all the methods available to me? Is there a simpler example of an app communicating with a Google Cloud Endpoint?
The service methods available to you are defined by the backend source in section 1.
In the example you posted, this line: myApiService.sayHi(name).execute()
is an actual invocation call to the backend that you defined by annotating @ApiMethod("sayHi") on the method in the MyEndpoint.java class of your backend module.
The reason your Android app defines an EndpointsAsyncTask is because slow operations such as calls that hit the network need to happen off of the UI thread to avoid locking the UI. The demo simply puts the returned value into a Toast but you could modify onPostExecute() to do whatever you'd like with the result.
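For example, a hypothetical call from inside an Activity (the name string is arbitrary, and android.util.Pair / android.content.Context are assumed to be imported) is all it takes to kick off the round trip:

// e.g. in a button's onClick handler inside an Activity:
new EndpointsAsyncTask().execute(new Pair<Context, String>(this, "Alice"));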
For more info on Google Endpoints check out:
https://cloud.google.com/appengine/docs/java/endpoints/
And for info about using an Android AsyncTask look here:
http://developer.android.com/reference/android/os/AsyncTask.html

Timeout waiting for connection from pool - despite single SolrServer

We are having problems with our solrServer client's connection pool running out of connections in no time, even when using a pool of several hundred (we've tried 1024, just for good measure).
From what I've read, the following exception can be caused by not using a singleton HttpSolrServer object. However, see our XML config below, as well:
Caused by: org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
at org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(PoolingClientConnectionManager.java:232)
at org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(PoolingClientConnectionManager.java:199)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:455)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:448)
XML Config:
<solr:solr-server id="solrServer" url="http://solr.url.domain/"/>
<solr:repositories base-package="de.ourpackage.data.solr" multicore-support="true"/>
At this point, we are at a loss. We are running a web application on Tomcat 7. Whenever a user requests a page, we send one or more requests to the Solr server, requesting whatever we need, which is usually single entries or a page of 20 (using Spring Data).
As for the rest of our implementation, we are using an abstract SolrOperationsRepository class, which is extended by each of our repositories (one repository for each core).
The following is how we set our solrServer. I suspect we are doing something fundamentally wrong here, which is why our connections are overflowing. According to the logs, they are always being returned into the pool, btw.
private SolrOperations solrOperations;
@SuppressWarnings("unchecked")
public final Class<T> getEntityClass() {
return (Class<T>)((ParameterizedType)getClass().getGenericSuperclass()).getActualTypeArguments()[0];
}
public final SolrOperations getSolrOperations() {
/*HttpSolrServer solrServer = (HttpSolrServer)solrOperations.getSolrServer();
solrServer.getHttpClient().getConnectionManager().closeIdleConnections(500, TimeUnit.MILLISECONDS);*/
logger.info("solrOperations: " + solrOperations);
return solrOperations;
}
@Autowired
public final void setSolrServer(SolrServer solrServer) {
try {
String core = SolrServerUtils.resolveSolrCoreName(getEntityClass());
SolrTemplate template = templateHolder.get(core);
/*solrServer.setConnectionTimeout(500);
solrServer.setMaxTotalConnections(2048);
solrServer.setDefaultMaxConnectionsPerHost(2048);
solrServer.getHttpClient().getConnectionManager().closeIdleConnections(500, TimeUnit.MILLISECONDS);*/
if ( template == null ) {
template = new SolrTemplate(new MulticoreSolrServerFactory(solrServer));
template.setSolrCore(core);
template.afterPropertiesSet();
logger.debug("Creating new SolrTemplate for core '" + core + "'");
templateHolder.put(core, template);
}
logger.debug("setting SolrServer " + template);
this.solrOperations = template;
} catch (Exception e) {
logger.error("cannot set solrServer...", e);
}
}
The code that is commented out has mostly been used for testing purposes. I also read somewhere else that you cannot manipulate the solrServer object on the fly. Which raises the question: how do I set a timeout/pool size in the XML config?
The implementation of a repository looks like this:
@Repository(value="stellenanzeigenSolrRepository")
public class StellenanzeigenSolrRepositoryImpl extends SolrOperationsRepository<Stellenanzeige> implements StellenanzeigenSolrRepositoryCustom {
...
public Query createQuery(Criteria criteria, Sort sort, Pageable pageable) {
Query resultQuery = new SimpleQuery(criteria);
if ( pageable != null ) resultQuery.setPageRequest(pageable);
if ( sort != null ) resultQuery.addSort(sort);
return resultQuery;
}
public Page<Stellenanzeige> findBySearchtext(String searchtext, Pageable pageable) {
Criteria searchtextCriteria = createSearchtextCriteria(searchtext);
Query query = createQuery(searchtextCriteria, null, pageable);
return getSolrOperations().queryForPage(query, getEntityClass());
}
...
}
Can any of you point to mistakes that we've made, that could possibly lead to this issue? Like I said, we are at a loss. Thanks in advance, and I will, of course update the question as we make progress or you request more information.
The MulticoreSolrServerFactory always returns an HttpSolrServer backed by the default PoolingClientConnectionManager, which only allows 2 concurrent connections to the same host, thus causing the above problem.
This seems to be a bug in spring-data-solr that can be worked around by creating a custom factory and overriding a few methods.
Edit: The clone method used by MulticoreSolrServerFactory (SolrServerUtils.cloneHttpSolrServer()) is broken. This hasn't been corrected yet. As some of my colleagues have run into this issue recently, I will post a workaround here: create your own class and override one method.
public class CustomMulticoreSolrServerFactory extends MulticoreSolrServerFactory {
public CustomMulticoreSolrServerFactory(final SolrServer solrServer) {
super(solrServer);
}
@Override
protected SolrServer createServerForCore(final SolrServer reference, final String core) {
// There is a bug in the original SolrServerUtils.cloneHttpSolrServer() method:
// it doesn't clone the ConnectionManager and always returns the default
// PoolingClientConnectionManager with a maximum of 2 connections per host.
if (StringUtils.hasText(core) && reference instanceof HttpSolrServer) {
HttpClient client = ((HttpSolrServer) reference).getHttpClient();
String baseURL = ((HttpSolrServer) reference).getBaseURL();
baseURL = SolrServerUtils.appendCoreToBaseUrl(baseURL, core);
return new HttpSolrServer(baseURL, client);
}
return reference;
}
}
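Wiring the custom factory in mirrors the setSolrServer method from the question; only the factory class changes (a sketch, not a drop-in file):

// Inside the @Autowired setSolrServer(SolrServer solrServer) method:
String core = SolrServerUtils.resolveSolrCoreName(getEntityClass());
SolrTemplate template = new SolrTemplate(new CustomMulticoreSolrServerFactory(solrServer));
template.setSolrCore(core);
template.afterPropertiesSet();
templateHolder.put(core, template);
this.solrOperations = template;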

Silverlight Enabled WCF Service Exception Handling

I've got a Silverlight enabled WCF web service set up and I'm connecting to it from my Silverlight application.
The service is not written using the async pattern, but Silverlight generates the async methods automatically.
I have a method within my service that has a chance of throwing an exception. I can catch this exception, but I'm not sure of the best way of handling it; I've noticed that the event args of the completed method contain an Error property.
Is it possible to set the value of this Error property?
Example Method
public class service
{
[OperationContract]
public Stream getData(string filename)
{
string filepath = HostingEnvironment.MapPath(filename);
FileInfo fi = new FileInfo(filepath);
try
{
Stream s = fi.Open(FileMode.Open);
return s;
}
catch (IOException e)
{
return null;
}
}
}
Silverlight Code
private void btnFoo_Click(object sender, RoutedEventArgs e)
{
ServiceClient svc = new ServiceClient();
svc.getDataCompleted += new EventHandler<getDataCompletedEventArgs>(getData_Completed);
svc.getDataAsync("text.txt");
}
void getData_Completed(object sender, getDataCompletedEventArgs e)
{
e.Error //how can i set this value on the service?
}
Finally, if the service is offline or times out, is there any way to catch this exception before it reaches the UnhandledException method within App.xaml?
Thanks
Since Silverlight calls services asynchronously, you don't get a synchronous exception thrown; instead the exception is stored in the e.Error property, which you need to check in your completed event handler.
To answer your question
how can i set this value on the service?
Simply throwing an exception on the server can be enough, given several other conditions.
You may want to introduce a FaultContract on your WCF service method and throw FaultException<T>, which is the common way to deal with errors in WCF.
However, faults result in an HTTP 500 status code, and Silverlight won't be able to read a response with such a status code and access the fault object, even if you add that attribute to the service.
This can be solved using several approaches.
Use the alternative client HTTP stack: You can register an alternative HTTP stack by using the RegisterPrefix method. See below for an outline of how to do this. Silverlight 4 provides the option of using a client HTTP stack which, unlike the default browser HTTP stack, allows you to process SOAP-compliant fault messages. However, a potential problem of switching to the alternative HTTP stack is that information stored by the browser (such as authentication cookies) will no longer be available to Silverlight, and thus certain scenarios involving secure services might stop working, or require additional code to work.
Modify the HTTP status code: You can modify your service to return SOAP faults with an HTTP status code of 200, so that Silverlight 4 will process the faults successfully. How to do this is outlined below. Note that this will make the service non-compliant with the SOAP protocol, because SOAP requires a response code in the 400 or 500 range for faults. If the service is a WCF service, you can create an endpoint behavior that plugs in a message inspector that changes the status code to 200. Then you can create an endpoint specifically for Silverlight consumption, and apply the behavior there. Your other endpoints will still remain SOAP-compliant.
Faults in silverlight
Creating and Handling Faults in Silverlight
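To illustrate the FaultContract route against the getData method from the question (the FileFault type and its members are made up for this sketch, and the usual System.ServiceModel / System.Runtime.Serialization usings are assumed):

[DataContract]
public class FileFault
{
    [DataMember]
    public string Reason { get; set; }
}

public class service
{
    [OperationContract]
    [FaultContract(typeof(FileFault))]
    public Stream getData(string filename)
    {
        string filepath = HostingEnvironment.MapPath(filename);
        FileInfo fi = new FileInfo(filepath);
        try
        {
            return fi.Open(FileMode.Open);
        }
        catch (IOException e)
        {
            // With one of the workarounds above in place, this surfaces on the
            // Silverlight side as a FaultException<FileFault> in e.Error.
            throw new FaultException<FileFault>(
                new FileFault { Reason = e.Message },
                new FaultReason("Could not open the requested file."));
        }
    }
}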
OR
[DataContract]
public class MyError
{
[DataMember]
public string Code { get; set; }
[DataMember]
public string Message { get; set; }
[DataMember]
public DateTime Time { get; set; }
}
public class service
{
[OperationContract]
public Stream getData(string filename, out MyError myError)
{
myError = null;
string filepath = HostingEnvironment.MapPath(filename);
FileInfo fi = new FileInfo(filepath);
try
{
Stream s = fi.Open(FileMode.Open);
return s;
}
catch (IOException e)
{
myError = new MyError() { Code = "000", Message = e.Message, Time = DateTime.Now };
return null;
}
}
}
I wish you successful projects.
