Is there any Solr API that can tell when all cores on that particular Solr node are loaded and can serve queries (legacy mode)?

I tried /solr/api/cores and I think that is what I want; however, if Solr has just rebooted, it returns the statuses of however many cores have been loaded so far. It does not wait for all cores. Is there a way to make it wait until all the cores are loaded and queryable?

What I ended up doing was writing a custom Solr plugin. Here is the code:
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Collection;
import java.util.List;
import java.util.stream.Collectors;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.core.CoreContainer;
import org.apache.solr.core.CoreDescriptor;
import org.apache.solr.core.CorePropertiesLocator;
import org.apache.solr.core.SolrCore;
import org.apache.solr.handler.RequestHandlerBase;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.request.SolrQueryRequestBase;
import org.apache.solr.request.SolrRequestHandler;
import org.apache.solr.response.SolrQueryResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CustomHealthCheckHandler extends RequestHandlerBase {

    private static final Logger LOG = LoggerFactory.getLogger(CustomHealthCheckHandler.class);

    @Override
    public String getDescription() {
        return "A simple healthcheck handler that checks if all cores are loaded and queryable";
    }

    @Override
    public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse resp) {
        CoreContainer coreContainer = req.getCore().getCoreContainer();
        // Get the list of core names regardless of whether they are loaded or not.
        // Note that we are intentionally not using coreContainer.getLoadedCoreNames()
        // or coreContainer.getAllCoreNames() here because they might not return all
        // the cores when Solr has just restarted and not all cores are loaded yet.
        Path solrHome = Paths.get(coreContainer.getSolrHome());
        CorePropertiesLocator locator = new CorePropertiesLocator(solrHome);
        List<CoreDescriptor> coreDescriptors = locator.discover(coreContainer);
        Collection<String> cores = coreDescriptors.stream()
                .map(CoreDescriptor::getName)
                .collect(Collectors.toList());
        for (String core : cores) {
            SolrCore solrCore = coreContainer.getCore(core);
            // if the core is not loaded yet, report UNHEALTHY
            if (solrCore == null) {
                resp.add("status", "UNHEALTHY");
                return;
            }
            // get the /admin/ping handler for the core
            SolrRequestHandler handler = solrCore.getRequestHandler("/admin/ping");
            // if the handler is missing, report UNHEALTHY
            if (handler == null) {
                resp.add("status", "UNHEALTHY");
                return;
            }
            SolrQueryResponse response = new SolrQueryResponse();
            SolrQuery query = new SolrQuery();
            query.set("wt", "json");
            query.set("indent", true);
            // execute the ping request against the core
            handler.handleRequest(new SolrQueryRequestBase(solrCore, query) {}, response);
            String status = (String) response.getValues().get("status");
            // if the status is null or not OK, report UNHEALTHY
            if (status == null || !status.equals("OK")) {
                resp.add("status", "UNHEALTHY");
                return;
            }
        }
        resp.add("status", "HEALTHY");
    }
}
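To make a deployment or startup script actually block on this, you can simply poll the handler over HTTP until it reports HEALTHY. Below is a minimal sketch using only JDK classes; the host, port, core name ("core1") and handler path ("/admin/healthcheck") are assumptions, so adjust them to wherever you register the handler in solrconfig.xml.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class WaitForSolrCores {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint: the custom handler registered in solrconfig.xml
        URL url = new URL("http://localhost:8983/solr/core1/admin/healthcheck?wt=json");
        while (true) {
            try {
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setConnectTimeout(2000);
                conn.setReadTimeout(5000);
                StringBuilder body = new StringBuilder();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        body.append(line);
                    }
                }
                // Crude string check; a real client would parse the JSON response
                if (body.toString().contains("HEALTHY") && !body.toString().contains("UNHEALTHY")) {
                    System.out.println("All cores are loaded and queryable");
                    return;
                }
            } catch (Exception e) {
                // Solr not reachable yet; fall through and retry
            }
            Thread.sleep(1000); // poll once a second
        }
    }
}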

Related

Hystrix Circuit breaker not opening the circuit

I am implementing a circuit breaker using Hystrix in my Spring Boot application; my code is something like below:
@Service
public class MyServiceHandler {

    @HystrixCommand(fallbackMethod = "fallback")
    public String callService() {
        // if the remote service is not reachable,
        // throw ServiceException
    }

    public String fallback() {
        // return default response
    }
}
// In application.properties, I have below properties defined:
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=10000
hystrix.command.default.circuitBreaker.requestVolumeThreshold=3
hystrix.command.default.circuitBreaker.sleepWindowInMilliseconds=30000
hystrix.threadpool.default.coreSize=4
hystrix.threadpool.default.metrics.rollingStats.timeInMilliseconds=200000
I see that fallback() is getting called with each failure of callService(). However, the circuit is not opening after 3 failures. After 3 failures, I expected it to call fallback() directly and skip callService(), but this is not happening. Can someone advise what I am doing wrong here?
Thanks,
B Jagan
Edited on 26th July to add more details below:
Below is the actual code. I played a bit further with this. I see that the circuit opens as expected on repeated failures when I call the remote service directly in the RegistrationHystrix.registerSeller() method. But when I wrap the remote service call within a Spring RetryTemplate, it keeps going into the fallback method, yet the circuit never opens.
@Service
public class RegistrationHystrix {

    Logger logger = LoggerFactory.getLogger(RegistrationHystrix.class);

    private RestTemplate restTemplate;
    private RetryTemplate retryTemplate;

    public RegistrationHystrix(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
        retryTemplate = new RetryTemplate();
        FixedBackOffPolicy fixedBackOffPolicy = new FixedBackOffPolicy();
        fixedBackOffPolicy.setBackOffPeriod(1000L);
        retryTemplate.setBackOffPolicy(fixedBackOffPolicy);
        SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
        retryPolicy.setMaxAttempts(3);
        retryTemplate.setRetryPolicy(retryPolicy);
    }

    @HystrixCommand(fallbackMethod = "fallbackForRegisterSeller", commandKey = "ordermanagement")
    public String registerSeller(SellerDto sellerDto) throws Exception {
        String response = retryTemplate.execute(new RetryCallback<String, Exception>() {
            @Override
            public String doWithRetry(RetryContext context) {
                logger.info(String.format("Retry count %d", context.getRetryCount()));
                return restTemplate.postForObject("/addSeller", sellerDto, String.class);
            }
        });
        return response;
    }

    public List<SellerDto> getSellersList() {
        return restTemplate.getForObject("/sellersList", List.class);
    }

    public String fallbackForRegisterSeller(SellerDto sellerDto, Throwable t) {
        logger.error("Inside fall back, cause - {}", t.toString());
        return "Inside fallback method. Some error occurred while calling service for seller registration";
    }
}
Below is the service class which calls the Hystrix-wrapped service above. This class is in turn invoked by a controller.
@Service
public class RegistrationServiceImpl implements RegistrationService {

    Logger logger = LoggerFactory.getLogger(RegistrationServiceImpl.class);

    private RegistrationHystrix registrationHystrix;

    public RegistrationServiceImpl(RegistrationHystrix registrationHystrix) {
        this.registrationHystrix = registrationHystrix;
    }

    @Override
    public String registerSeller(SellerDto sellerDto) throws Exception {
        long start = System.currentTimeMillis();
        String registerSeller = registrationHystrix.registerSeller(sellerDto);
        logger.info("add seller call returned in - {}", System.currentTimeMillis() - start);
        return registerSeller;
    }
}
So, I am trying to understand why the Circuit breaker is not working as expected when using it along with Spring RetryTemplate.
You should be using metrics.healthSnapshot.intervalInMilliseconds while testing. I guess you are executing all 3 requests within the default 500 ms window, and hence the circuit isn't opening. You can either decrease this interval or put a sleep between the 3 requests.
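For example, while testing you could shrink that window in application.properties (the 10 ms value below is purely illustrative; the default is 500 ms):
hystrix.command.default.metrics.healthSnapshot.intervalInMilliseconds=10
With a smaller snapshot interval, the three failures show up in the health metrics before the next call, so the circuit can open once requestVolumeThreshold is reached.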

Akka Streams - a Merge stage sometimes pushes downstream only once all upstream sources have pushed to it

I have been experimenting with writing a custom Source in Java. Specifically, I wrote a Source that takes elements from a BlockingQueue. I'm aware of Source.queue; however, I don't know how to get the materialized value if I connect several of those to a Merge stage. Anyway, here's the implementation:
public class TestingSource extends GraphStage<SourceShape<String>> {

    private static final ExecutorService executor = Executors.newCachedThreadPool();

    public final Outlet<String> out = Outlet.create("TestingSource.out");
    private final SourceShape<String> shape = SourceShape.of(out);

    private final BlockingQueue<String> queue;
    private final String identifier;

    public TestingSource(BlockingQueue<String> queue, String identifier) {
        this.queue = queue;
        this.identifier = identifier;
    }

    @Override
    public SourceShape<String> shape() {
        return shape;
    }

    @Override
    public GraphStageLogic createLogic(Attributes inheritedAttributes) {
        return new GraphStageLogic(shape()) {

            private AsyncCallback<BlockingQueue<String>> callBack;

            {
                setHandler(out, new AbstractOutHandler() {
                    @Override
                    public void onPull() throws Exception {
                        String string = queue.poll();
                        if (string == null) {
                            System.out.println("TestingSource " + identifier + " no records in queue, invoking callback");
                            executor.submit(() -> callBack.invoke(queue)); // necessary, otherwise blocks upstream
                        } else {
                            System.out.println("TestingSource " + identifier + " found record during pull, pushing");
                            push(out, string);
                        }
                    }
                });
            }

            @Override
            public void preStart() {
                callBack = createAsyncCallback(queue -> {
                    String string = null;
                    while (string == null) {
                        Thread.sleep(100);
                        string = queue.poll();
                    }
                    push(out, string);
                    System.out.println("TestingSource " + identifier + " found record during callback, pushed");
                });
            }
        };
    }
}
This example works, so it seems that my Source is working properly:
private static void simpleStream() throws InterruptedException {
    BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    Source.fromGraph(new TestingSource(queue, "source"))
            .to(Sink.foreach(record -> System.out.println(record)))
            .run(materializer);

    Thread.sleep(2500);
    queue.add("first");
    Thread.sleep(2500);
    queue.add("second");
}
I also wrote an example that connects two of the Sources to a Merge stage:
private static void simpleMerge() throws InterruptedException {
    BlockingQueue<String> queue1 = new LinkedBlockingQueue<>();
    BlockingQueue<String> queue2 = new LinkedBlockingQueue<>();

    final RunnableGraph<?> result = RunnableGraph.fromGraph(GraphDSL.create(
            Sink.foreach(record -> System.out.println(record)),
            (builder, out) -> {
                final UniformFanInShape<String, String> merge =
                        builder.add(Merge.create(2));
                builder.from(builder.add(new TestingSource(queue1, "queue1")))
                        .toInlet(merge.in(0));
                builder.from(builder.add(new TestingSource(queue2, "queue2")))
                        .toInlet(merge.in(1));
                builder.from(merge.out())
                        .to(out);
                return ClosedShape.getInstance();
            }));
    result.run(materializer);

    Thread.sleep(2500);
    System.out.println("seeding first queue");
    queue1.add("first");
    Thread.sleep(2500);
    System.out.println("seeding second queue");
    queue2.add("second");
}
Sometimes this example works as I expect: it prints "first" after 5 seconds, and then prints "second" after another 5 seconds.
However, sometimes (about 1 in 5 runs) it prints "second" after 10 seconds, and then immediately prints "first". In other words, the Merge stage pushes the strings downstream only after both Sources have pushed something.
The full output looks like this:
TestingSource queue1 no records in queue, invoking callback
TestingSource queue2 no records in queue, invoking callback
seeding first queue
seeding second queue
TestingSource queue2 found record during callback, pushed
second
TestingSource queue2 no records in queue, invoking callback
TestingSource queue1 found record during callback, pushed
first
TestingSource queue1 no records in queue, invoking callback
This phenomenon happens more frequently with MergePreferred and MergePrioritized.
My question is: is this the correct behavior of Merge? If not, what am I doing wrong?
At first glance, blocking the thread with a Thread.sleep in the middle of the stage seems to be at least one of the problems.
Anyway, I think it would be way easier to use Source.queue, as you mention in the beginning of your question. If the issue is to extract the materialized value when using the GraphDSL, here's how you do it:
final Source<String, SourceQueueWithComplete<String>> source =
        Source.queue(100, OverflowStrategy.backpressure());
final Sink<Object, CompletionStage<akka.Done>> sink = Sink.ignore();

final RunnableGraph<Pair<SourceQueueWithComplete<String>, CompletionStage<akka.Done>>> g =
        RunnableGraph.fromGraph(
                GraphDSL.create(
                        source,
                        sink,
                        Keep.both(),
                        (b, src, snk) -> {
                            b.from(src).to(snk);
                            return ClosedShape.getInstance();
                        }
                )
        );

g.run(materializer); // this gives you back the queue
More info on this in the docs.
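For completeness, here is a sketch of how you would then feed elements through the materialized queue (this assumes the g and materializer from the snippet above; offer and complete are the standard SourceQueueWithComplete methods):
Pair<SourceQueueWithComplete<String>, CompletionStage<akka.Done>> materialized = g.run(materializer);
SourceQueueWithComplete<String> queue = materialized.first();

queue.offer("first");   // each offer returns a CompletionStage<QueueOfferResult>
queue.offer("second");
queue.complete();       // signal that no more elements will be offered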

How to retrieve all instances of an entity from the JHipster API

When calling the generated API while using a paginator, is there any way I can call the generated REST API to retrieve ALL instances of an object, instead of only the first 20, 30, 40, etc.?
I find that since I am using pagination for my entity creation and management, when I want to use these entities in other (self-created) views, the API does not provide all the instances when calling entity.query() in Angular.
Is this a limitation of JHipster, or can I call the REST API in some other way, supplying info to disable the paginator?
You can modify the existing REST controller for that entity. Here is an example with a Center entity.
I return all centers if there is no value for offset and limit.
@RequestMapping(value = "/centers",
        method = RequestMethod.GET,
        produces = MediaType.APPLICATION_JSON_VALUE)
@Timed
public ResponseEntity<List<Center>> getAll(@RequestParam(value = "page", required = false) Integer offset,
                                           @RequestParam(value = "per_page", required = false) Integer limit)
        throws URISyntaxException {
    if (offset == null && limit == null) {
        return new ResponseEntity<List<Center>>(centerRepository.findAll(), HttpStatus.OK);
    } else {
        Page<Center> page = centerRepository.findAll(PaginationUtil.generatePageRequest(offset, limit));
        HttpHeaders headers = PaginationUtil.generatePaginationHttpHeaders(page, "/api/centers", offset, limit);
        return new ResponseEntity<List<Center>>(page.getContent(), headers, HttpStatus.OK);
    }
}
Then in Angular, you just have to call Center.query() without params.
It's an old question, but for anyone who's looking for an easy solution: you need to override the default PageableHandlerMethodArgumentResolver bean:
@Configuration
public class CustomWebConfigurer implements WebMvcConfigurer {

    @Override
    public void addArgumentResolvers(List<HandlerMethodArgumentResolver> argumentResolvers) {
        PageableHandlerMethodArgumentResolver resolver = new PageableHandlerMethodArgumentResolver();
        resolver.setFallbackPageable(Pageable.unpaged());
        argumentResolvers.add(resolver);
    }
}
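With that resolver in place, a controller method can simply accept a Pageable: when the client sends no page/size parameters, the fallback Pageable.unpaged() is injected and findAll returns every instance in a single page. A rough sketch, reusing the Center entity and repository from the first answer (@GetMapping used as shorthand; adapt names to your generated code):
@GetMapping("/centers")
public List<Center> getAllCenters(Pageable pageable) {
    // pageable is Pageable.unpaged() when no page/size params are supplied,
    // so this returns every Center in one page
    return centerRepository.findAll(pageable).getContent();
}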

Session-Per-Request with SqlConnection / System.Transactions

I've just started using Dapper for a project, having mostly used ORMs like NHibernate and EF for the past few years.
Typically in our web applications we implement session per request, beginning a transaction at the start of the request and committing it at the end.
Should we do something similar when working directly with SqlConnection / System.Transactions?
How does StackOverflow do it?
Solution
Taking the advice of both @gbn and @Sam Saffron, I'm not using transactions. In my case I'm only doing read queries, so it seems there is no real requirement to use transactions (contrary to what I've been told about implicit transactions).
I create a lightweight session interface so that I can use a connection per request. This is quite beneficial to me as, with Dapper, I often need to run a few different queries to build up an object and would rather share the same connection.
The work of scoping the connection per request and disposing it is done by my IoC container (StructureMap):
public interface ISession : IDisposable {
    IDbConnection Connection { get; }
}

public class DbSession : ISession {
    private static readonly object @lock = new object();
    private readonly ILogger logger;
    private readonly string connectionString;
    private IDbConnection cn;

    public DbSession(string connectionString, ILogger logger) {
        this.connectionString = connectionString;
        this.logger = logger;
    }

    public IDbConnection Connection { get { return GetConnection(); } }

    private IDbConnection GetConnection() {
        if (cn == null) {
            lock (@lock) {
                if (cn == null) {
                    logger.Debug("Creating Connection");
                    cn = new SqlConnection(connectionString);
                    cn.Open();
                    logger.Debug("Opened Connection");
                }
            }
        }
        return cn;
    }

    public void Dispose() {
        if (cn != null) {
            logger.Debug("Disposing connection (current state '{0}')", cn.State);
            cn.Dispose();
        }
    }
}
This is what we do:
We define a static property called DB on an object called Current:
public static DBContext DB
{
    get
    {
        var result = GetContextItem<DBContext>(itemKey);
        if (result == null)
        {
            result = InstantiateDB();
            SetContextItem(itemKey, result);
        }
        return result;
    }
}
public static T GetContextItem<T>(string itemKey, bool strict = true)
{
#if DEBUG // HttpContext is null for unit test calls, which are only done in DEBUG
    if (Context == null)
    {
        var result = CallContext.GetData(itemKey);
        return result != null ? (T)result : default(T);
    }
    else
    {
#endif
        var ctx = HttpContext.Current;
        if (ctx == null)
        {
            if (strict) throw new InvalidOperationException("GetContextItem without a context");
            return default(T);
        }
        else
        {
            var result = ctx.Items[itemKey];
            return result != null ? (T)result : default(T);
        }
#if DEBUG
    }
#endif
}

public static void SetContextItem(string itemKey, object item)
{
#if DEBUG // HttpContext is null for unit test calls, which are only done in DEBUG
    if (Context == null)
    {
        CallContext.SetData(itemKey, item);
    }
    else
    {
#endif
        HttpContext.Current.Items[itemKey] = item;
#if DEBUG
    }
#endif
}
In our case InstantiateDB returns an L2S (LINQ to SQL) context; in your case it could be an open SqlConnection or whatever.
On our application object we ensure that our connection is closed at the end of the request.
protected void Application_EndRequest(object sender, EventArgs e)
{
    Current.DisposeDB(); // closes connection, clears context
}
Then anywhere in your code where you need access to the db, you simply call Current.DB and stuff automatically works. This is also unit-test friendly due to all the #if DEBUG stuff.
We do not start any transactions per session; if we did, and had updates at the beginning of our session, we would get serious locking issues, as the locks would not be released till the end.
You'd only start a SQL Server Transaction when you need to with something like TransactionScope when you call the database with a "write" call.
See a random example in this recent question: Why is a nested transaction committed even if TransactionScope.Complete() is never called?
You would not open a connection and start a transaction per HTTP request. Only on demand. I'm having difficulty understanding why some folk advocate opening a database transaction per session: sheer idiocy when you look at what a database transaction is.
Note: I'm not against the pattern per se. I am against unnecessary, too long, client-side database transactions that invoke MSDTC.

How to run batched WCF service calls in Silverlight BackgroundWorker

Is there any existing plumbing to run WCF calls in batches in a BackgroundWorker?
Obviously, since all Silverlight WCF calls are async, if I run them all in a BackgroundWorker they will all return instantly.
I just don't want to implement a nasty hack if there's a nice way to run service calls and collect the results.
It doesn't matter what order they are done in
All operations are independent
I'd like to have no more than 5 items running at once
Edit: I've also noticed (when using Fiddler) that no more than about 7 calls are able to be sent at any one time. Even when running out-of-browser this limit applies. Is this due to my default browser settings, or is it configurable? Obviously it's a poor man's solution (and not suitable for what I want), but it's something I'll probably need to take account of to make sure the rest of my app remains responsive if I'm running this as a background task and don't want it using up all my connections.
I think your best bet would be to have your main thread put service request items into a Queue that is shared with a BackgroundWorker thread. The BackgroundWorker can then read from the Queue, and when it detects a new item, initiate the async WCF service request and set up a handler for the completion event. Don't forget to lock the Queue before you call Enqueue() or Dequeue() from different threads.
Here is some code that suggests the beginning of a solution:
using System;
using System.Collections.Generic;
using System.ComponentModel;

namespace MyApplication
{
    public class RequestItem
    {
        public string RequestItemData { get; set; }
    }

    public class ServiceHelper
    {
        private BackgroundWorker _Worker = new BackgroundWorker();
        private Queue<RequestItem> _Queue = new Queue<RequestItem>();
        private List<RequestItem> _ActiveRequests = new List<RequestItem>();
        private const int _MaxRequests = 3;

        public ServiceHelper()
        {
            _Worker.DoWork += DoWork;
            _Worker.RunWorkerAsync();
        }

        private void DoWork(object sender, DoWorkEventArgs e)
        {
            while (!_Worker.CancellationPending)
            {
                // TBD: Add an N millisecond timer here
                // so we are not constantly checking the Queue

                // Don't bother checking the queue
                // if we already have MaxRequests in process
                int _NumRequests = 0;
                lock (_ActiveRequests)
                {
                    _NumRequests = _ActiveRequests.Count;
                }
                if (_NumRequests >= _MaxRequests)
                    continue;

                // Check the queue for new request items
                RequestItem item = null;
                lock (_Queue)
                {
                    if (_Queue.Count > 0)
                        item = _Queue.Dequeue();
                }
                if (item == null)
                    continue;

                // We found a new request item!
                lock (_ActiveRequests)
                {
                    _ActiveRequests.Add(item);
                }

                // TBD: Initiate an async service request,
                // something like the following:
                try
                {
                    MyServiceRequestClient proxy = new MyServiceRequestClient();
                    proxy.RequestCompleted += OnRequestCompleted;
                    proxy.RequestAsync(item);
                }
                catch (Exception ex)
                {
                }
            }
        }

        private void OnRequestCompleted(object sender, RequestCompletedEventArgs e)
        {
            try
            {
                if (e.Error != null || e.Cancelled)
                    return;

                RequestItem item = e.Result;
                lock (_ActiveRequests)
                {
                    _ActiveRequests.Remove(item);
                }
            }
            catch (Exception ex)
            {
            }
        }

        public void AddRequest(RequestItem item)
        {
            lock (_Queue)
            {
                _Queue.Enqueue(item);
            }
        }
    }
}
Let me know if I can offer more help.
