I want to mock out my ActiveMQ instance in my unit tests, so I set up the queue like this:
camelContext = new DefaultCamelContext();
camelContext.setErrorHandlerBuilder(new LoggingErrorHandlerBuilder());
camelContext.getShutdownStrategy().setTimeout(SHUTDOWN_TIMEOUT_SECONDS);
routePolicy = new RoutePolicy();
routePolicy.setCamelContext(camelContext);
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
connectionFactory.setBrokerURL("vm:localhost");
// use a pooled connection factory between the module and the queue
pooledConnectionFactory = new PooledConnectionFactory(connectionFactory);
// how many connections should there be in the connection pool?
pooledConnectionFactory.setMaxConnections(this.maxConnections);
pooledConnectionFactory.setMaximumActiveSessionPerConnection(this.maxActiveSessionPerConnection);
pooledConnectionFactory.setCreateConnectionOnStartup(true);
pooledConnectionFactory.setBlockIfSessionPoolIsFull(false);
JmsConfiguration jmsConfiguration = new JmsConfiguration(pooledConnectionFactory);
jmsConfiguration.setDeliveryPersistent(false);
ActiveMQComponent activeMQComponent = ActiveMQComponent.activeMQComponent("vm:localhost");
However, when I send a message to the queue like this:
producerTemplate.sendBody(uri, message);
the process hangs at
FailoverTransport.oneway:600
Any idea what I could be doing wrong with the embedded broker? This all works fine when connecting to a TCP endpoint.
You need to change the URL to vm://localhost (note the double slash), or even vm://localhost?broker.persistent=false, which is common in unit tests to avoid writing temp data to disk. With vm:localhost the client never reaches the embedded broker, which is most likely why the send blocks in FailoverTransport.oneway, waiting for a broker that never becomes reachable.
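A minimal sketch of the corrected wiring under that assumption, reusing the names from the question (the addComponent call is an addition; the original snippet never registers the component with the context):
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
// double slash, plus an embedded non-persistent broker for tests
connectionFactory.setBrokerURL("vm://localhost?broker.persistent=false");

PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory(connectionFactory);

JmsConfiguration jmsConfiguration = new JmsConfiguration(pooledConnectionFactory);
jmsConfiguration.setDeliveryPersistent(false);

// the broker URL flows in through the pooled connection factory
ActiveMQComponent activeMQComponent = new ActiveMQComponent();
activeMQComponent.setConfiguration(jmsConfiguration);
camelContext.addComponent("activemq", activeMQComponent);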
The code below successfully establishes a websocket connection.
The WebSocket server (also akka-http) deliberately closes the connection using Andrew's suggested answer here.
The SinkActor below receives a message of type akka.actor.Status.Failure so I know that the flow of messages from Server to Client has been disrupted.
My question is ... How should my client reestablish the websocket connection? Has source.via(webSocketFlow).to(sink).run() completed?
What is best practice for cleaning up the resources and retrying the websocket connection?
class ConnectionAdminActor extends Actor with ActorLogging {

  implicit val system: ActorSystem = context.system
  implicit val flowMaterializer = ActorMaterializer()

  private val sinkActor = context.system.actorOf(Props[SinkActor], name = "SinkActor")
  private val sink = Sink.actorRefWithAck[Message](sinkActor, StartupWithActor(self.path), Ack, Complete)

  private val source = Source.actorRef[TextMessage](10, OverflowStrategy.dropHead).mapMaterializedValue {
    ref => {
      self ! StartupWithActor(ref.path)
      ref
    }
  }

  private val webSocketFlow: Flow[Message, Message, Future[WebSocketUpgradeResponse]] =
    Http().webSocketClientFlow(WebSocketRequest("ws://localhost:8080"))

  source
    .via(webSocketFlow)
    .to(sink)
    .run()
Try the recoverWithRetries combinator (docs here).
This allows you to provide an alternative Source that your pipeline will switch to in case the upstream has failed. In the simplest case, you can just reuse the same Source, which should issue a new connection.
val wsSource = source via webSocketFlow

wsSource
  .recoverWithRetries(attempts = -1, { case e: Throwable => wsSource })
  .to(sink)
  .run()
Note that:
attempts = -1 will retry the connection indefinitely
the partial function allows for more granular control over which exceptions trigger a reconnect
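For example, a sketch that reconnects only on transport-level failures and lets anything else propagate (the choice of StreamTcpException is an assumption, not something from the original answer):
wsSource
  .recoverWithRetries(attempts = -1, {
    // reconnect only when the transport fails, e.g. the server drops the connection
    case _: akka.stream.StreamTcpException => wsSource
  })
  .to(sink)
  .run()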
I'm having trouble with the Pull Task Queue REST API. Whenever I try it, it says "403 - you are not allowed to make this api call". I'm trying this from my own computer, which is obviously outside of App Engine and Compute Engine.
I have my service account credential and my queue.xml in WEB-INF, and now I'm wondering whether the queue must be created before I start using it ... is that necessary?
This is my code... Am I missing something?
JsonFactory JSON_FACTORY = JacksonFactory.getDefaultInstance();
HttpTransport httpTransport = GoogleNetHttpTransport.newTrustedTransport();
List<String> scopes = new ArrayList<>();
scopes.add(TaskqueueScopes.TASKQUEUE);
scopes.add(TaskqueueScopes.TASKQUEUE_CONSUMER);
ClassLoader classloader = Thread.currentThread().getContextClassLoader();
InputStream is = classloader.getResourceAsStream("credential-12356.json");
GoogleCredential credential = GoogleCredential.fromStream(is).createScoped(scopes);
Taskqueue taskQueue = new Taskqueue.Builder(httpTransport, JSON_FACTORY, credential).setApplicationName(APPLICATION_NAME).build();
Taskqueue.Taskqueues.Get request = taskQueue.taskqueues().get(projectId, taskQueueName);
request.setGetStats(true);
//Get the queue!
TaskQueue queue = request.execute();
Did you configure an email address in your queue configuration in queue.xml? The account you authenticate with has to be listed in the queue's <acl>, otherwise the REST API rejects the call. For example:
<queue>
  <name>pull-queue</name>
  <mode>pull</mode>
  <acl>
    <user-email>xyz@gmail.com</user-email>
  </acl>
</queue>
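If you authenticate as a service account rather than a Google account, the same idea should apply; a sketch with a hypothetical service-account address (per the queue.xml reference, writer-email additionally grants rights to insert and delete tasks):
<acl>
  <!-- hypothetical service-account address; use the one from your credential JSON -->
  <user-email>my-robot@my-project.iam.gserviceaccount.com</user-email>
  <writer-email>my-robot@my-project.iam.gserviceaccount.com</writer-email>
</acl>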
For the last few weeks we have been experiencing this error message while using the Azure Search SDK (1.1.1 - 1.1.2) and performing searches.
We consume the Search SDK from internal APIs (deployed as Azure Web Apps) that scale up and down based on traffic (so there can be more than one instance of the APIs doing the searches).
Our API queries 5 different indexes and maintains an in-memory copy of the SearchIndexClient object that corresponds to each index; a very simple implementation would look like this:
public class AzureSearchService
{
    private readonly SearchServiceClient _serviceClient;
    private Dictionary<string, SearchIndexClient> _clientDictionary;

    public AzureSearchService()
    {
        _serviceClient = new SearchServiceClient("myservicename", new SearchCredentials("myservicekey"));
        _clientDictionary = new Dictionary<string, SearchIndexClient>();
    }

    public SearchIndexClient GetClient(string indexName)
    {
        try
        {
            if (!_clientDictionary.ContainsKey(indexName))
            {
                _clientDictionary.Add(indexName, _serviceClient.Indexes.GetClient(indexName));
            }
            return _clientDictionary[indexName];
        }
        catch
        {
            return null;
        }
    }

    public async Task<SearchResults> SearchIndex(SearchIndexClient client, string text)
    {
        var parameters = new SearchParameters();
        parameters.Top = 10;
        parameters.IncludeTotalResultCount = true;
        var response = await client.Documents.SearchWithHttpMessagesAsync(text, parameters, null, null);
        return response.Body;
    }
}
And the API would invoke the service like this:
public class SearchController : ApiController
{
    private readonly AzureSearchService service;

    public SearchController()
    {
        service = new AzureSearchService();
    }

    public async Task<HttpResponseMessage> Post(string indexName, [FromBody] string text)
    {
        var indexClient = service.GetClient(indexName);
        var results = await service.SearchIndex(indexClient, text);
        return Request.CreateResponse(HttpStatusCode.OK, results, Configuration.Formatters.JsonFormatter);
    }
}
We are using SearchWithHttpMessagesAsync rather than SearchAsync because of a requirement to receive custom HTTP headers.
This way we avoid opening and closing the client under traffic bursts. Before using this in-memory cache (when each client was wrapped in a using block), we would get port-exhaustion alerts on Azure App Service.
Is this a good pattern? Could we be receiving this error because of the multiple instances running in parallel?
In case it is needed, the stack trace shows:
System.Net.Http.HttpRequestException: Only one usage of each socket address (protocol/network address/port) is normally permitted service.ip.address.hidden:443
[SocketException: Only one usage of each socket address (protocol/network address/port) is normally permitted service.ip.address.hidden:443]
   at System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)
   at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Exception& exception)
[WebException: Unable to connect to the remote server]
   at System.Net.HttpWebRequest.EndGetRequestStream(IAsyncResult asyncResult, TransportContext& context)
   at System.Net.Http.HttpClientHandler.GetRequestStreamCallback(IAsyncResult ar)
EDIT: We are also receiving this error: "A connection attempt failed because the connected party did not properly respond after a period of time":
System.Net.Http.HttpRequestException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond service.ip.address.hidden:443
[SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond service.ip.address.hidden:443]
   at System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)
   at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Exception& exception)
[WebException: Unable to connect to the remote server]
   at System.Net.HttpWebRequest.EndGetRequestStream(IAsyncResult asyncResult, TransportContext& context)
   at System.Net.Http.HttpClientHandler.GetRequestStreamCallback(IAsyncResult ar)
As implemented in the code in your question, the cache will not prevent port exhaustion. This is because you're instantiating it as a field of the ApiController, which is created once per request. If you want to avoid port exhaustion, the cache must be shared across all requests. To make it concurrency-safe, you should use something like ConcurrentDictionary instead of Dictionary.
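A minimal sketch of that shape, assuming a static, process-wide cache is acceptable for your app (the field names are illustrative):
using System.Collections.Concurrent;

public class AzureSearchService
{
    // one service client per process, shared by every request
    private static readonly SearchServiceClient ServiceClient =
        new SearchServiceClient("myservicename", new SearchCredentials("myservicekey"));

    // ConcurrentDictionary is safe to read and write from concurrent request threads
    private static readonly ConcurrentDictionary<string, SearchIndexClient> Clients =
        new ConcurrentDictionary<string, SearchIndexClient>();

    public SearchIndexClient GetClient(string indexName)
    {
        // creates the index client on first use, then reuses the cached instance
        return Clients.GetOrAdd(indexName, name => ServiceClient.Indexes.GetClient(name));
    }
}
Note that GetOrAdd can invoke the factory more than once under contention; the occasional extra SearchIndexClient is harmless compared to opening a new connection per request.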
The "connection attempt failed" error is likely unrelated.
I am trying to retrieve a file from an FTP server with anonymous authentication using java.net.URLConnection.
try {
    url = new URL("ftp://ftp2.sat.gob.mx/Certificados/FEA/000010/000002/02/03/05/00001000000202030500.cer");
    URLConnection con = url.openConnection();
    InputStream in = con.getInputStream();
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] buffer = new byte[4096];
    int bytesRead;
    while ((bytesRead = in.read(buffer)) >= 0) {
        baos.write(buffer, 0, bytesRead);
    }
    baos.flush();
    arr = baos.toByteArray();
    in.close();
} catch (Exception e) {
    throw new Exception("Error SAT: " + e.getMessage());
}
The file I am trying to get is the one below; it is on an anonymous-authentication FTP site:
ftp://ftp2.sat.gob.mx/Certificados/FEA/000010/000002/02/03/05/00001000000202030500.cer
But every time I get this error:
Permission denied: Attempt to bind port without permission.
I am using Google App Engine Java 1.7.
Any kind of advice is welcome.
I'm not a Java guy, but I suspect you're trying to use "active" FTP, which is likely the default.
Active FTP works by binding to a port on the receiving computer (the client in this case), to which the sending server then connects to deliver the file; the client advertises that port number to the server when it requests the transfer. This doesn't work in many environments, e.g. behind NAT.
The usual solution is to use "passive" mode, which behaves more like HTTP and doesn't require any port binding. If there's a way in Java to twiddle that connection to use passive mode, it should bypass the permissions issue.
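As far as I know, java.net.URLConnection doesn't expose an active/passive toggle, so one option is a dedicated FTP client. A sketch using Apache Commons Net (a library choice I'm assuming here, not something from the question):
import java.io.ByteArrayOutputStream;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class FtpFetch {
    public static byte[] fetch() throws Exception {
        FTPClient ftp = new FTPClient();
        try {
            ftp.connect("ftp2.sat.gob.mx");
            ftp.login("anonymous", "");          // anonymous authentication
            ftp.enterLocalPassiveMode();         // passive mode: the client opens the data connection
            ftp.setFileType(FTP.BINARY_FILE_TYPE);
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            ftp.retrieveFile("/Certificados/FEA/000010/000002/02/03/05/00001000000202030500.cer", baos);
            return baos.toByteArray();
        } finally {
            ftp.disconnect();
        }
    }
}
Note that on App Engine outbound sockets still require a billing-enabled app, as the answer below points out.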
Most likely you have a non-billing-enabled app. According to this post and the App Engine Java Socket API documentation, you just have to enable billing; if you have no budget, set the limits to $0.
I have a scenario where I am using a consumer template to receive a file from an endpoint. The endpoint could be either the file system or an FTP site. Currently I am using only the file system, with the following endpoint URI:
file://D:/metamodel/Seach.json?noop=true&idempotent=false
On every hit to the following code:
Exchange exchange = consumerTemplate.receive(endPointURI, timeout);
if (exchange != null) {
    String body = exchange.getIn().getBody(String.class);
    consumerTemplate.doneUoW(exchange);
    return body;
}
It creates a new Camel context thread, and after some hits it fails with:
java.util.concurrent.RejectedExecutionException: PollingConsumer on Endpoint[file://D:/metamodel/Seach.json?noop=true&idempotent=false] is not started, but in state:Stopped
I am not sure why this is happening; it is sporadic in nature.
Any suggestions would be a great help.