Azure Maps Get Search Address Structured API is not giving correct results for 'Michigan,USA'

I am not getting correct results for 'Michigan,USA' while using the Azure Maps Get Search Address Structured API with the parameter 'countrySubdivision' set to 'MI' or 'Michigan'.
I tried the requests below, but I get no results only for 'Michigan,USA'; other US states such as AL, WA, WY, and NY return results:
https://atlas.microsoft.com/search/address/structured/json?api-version=1.0&subscription-key={subscription-key}&countryCode=US&countrySubdivision=MI
https://atlas.microsoft.com/search/address/structured/json?api-version=1.0&subscription-key={subscription-key}&countryCode=US&countrySubdivision=Michigan
I am expecting results for the state of Michigan, USA from the Azure Maps Get Search Address Structured API.

It looks like the API uses all of the query parameters in the request to construct the query for the geocoder. So "MI, USA" will not necessarily be interpreted as the state of Michigan rather than Michigan City in Indiana (ideally it should be, I think). Something like "Michigan state USA" does return some results, but I think you'll have to add a location bias such as postalCode for the API to return more meaningful results.
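
For illustration, here is a minimal TypeScript sketch of such a biased request. The ZIP code 48933 (a Lansing, MI ZIP) is an assumed example of a location bias, and the subscription key is a placeholder, as in the question:

// Minimal sketch: add postalCode as a location bias so the geocoder
// resolves "MI" to the state of Michigan. Key and ZIP are placeholders.
const params = new URLSearchParams({
  "api-version": "1.0",
  "subscription-key": "{subscription-key}", // placeholder
  countryCode: "US",
  countrySubdivision: "MI",
  postalCode: "48933", // assumed example: a Lansing, MI ZIP code
});

const response = await fetch(
  `https://atlas.microsoft.com/search/address/structured/json?${params}`,
);
const data = await response.json();
console.log(data.results?.[0]?.address); // best match, if any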

The team has made some improvements around this. Testing the provided queries, I'm seeing much more relevant results.

Related

TanStack Table (React-Table) | How to convert the column filter state into the format that the backend is expecting?

I'm trying to implement server-side column filtering for a table in my React app, but I'm having trouble converting the "ColumnFiltersState" object that I'm using on the frontend into the format that the backend (which uses the "nestjs-paginate" library) is expecting. Each entry in "ColumnFiltersState" has a "value" field typed as "unknown", and I'm not sure how to handle it. I have a couple of possible solutions in mind:
One solution could be to use the "filterFn" property for each column, and pass in the filter operator that the backend is expecting (e.g. '$eq', '$gt', etc.) along with the value.
Another approach would be to define a separate mapping function that maps the "ColumnFiltersState" object into the format the backend is expecting, using the appropriate filter operator and value for each column. But then how would we know which operator to use? Maybe by adding a custom meta prop to the columnDef.
Can anyone provide some guidance on how to correctly map the column filters for the backend, and which solution would be the better approach? I would really appreciate any feedback or advice, and example code would be even better.
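
For illustration, here is a minimal TypeScript sketch of the second approach, where each columnDef carries its operator in a custom meta prop. The meta shape, the $eq default, and the filter.<column>=$op:value output format are assumptions about a nestjs-paginate-style backend, not a confirmed contract:

import type { ColumnFiltersState } from "@tanstack/react-table";

// Hypothetical: each column declares its backend operator via a custom meta prop.
type FilterMeta = { filterOperator?: "$eq" | "$gt" | "$lt" | "$ilike" };

function toBackendFilters(
  filters: ColumnFiltersState,
  metaByColumn: Record<string, FilterMeta>,
): URLSearchParams {
  const params = new URLSearchParams();
  for (const { id, value } of filters) {
    if (value == null || value === "") continue; // skip empty filters
    const op = metaByColumn[id]?.filterOperator ?? "$eq"; // assumed default
    params.append(`filter.${id}`, `${op}:${String(value)}`); // nestjs-paginate style
  }
  return params;
}

// e.g. toBackendFilters([{ id: "age", value: 30 }], { age: { filterOperator: "$gt" } })
// serializes to "filter.age=%24gt%3A30"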

Using Matomo API to get top 10 visited pages starting with a certain URL

For a weblog I am trying to get the top 10 popular posts from, for example, the last month. I figured I'd get the data out of Matomo, as that's already tracking visits and has an API. I've never used this API before though, so I've been reading the documentation and trying out some things. I am able to get data from the API using the Actions.getPageUrls method. However, when I try to filter using segment=^http://example.org/post I still get data from other URLs. It looks like it filters on sessions and returns all data from sessions that have at least one page matching the filter.
The full URL I'm using is: http://example.org/matomo/index.php?&module=API&token_auth=12345&method=Actions.getPageUrls&format=json&idSite=1&period=month&date=today&expanded=1&segment=pageUrl%3D%5Ehttp%253A%252F%252Fexample.org%252Fpost. I've also tried with less and no URL encoding for the segment, but that doesn't seem to make a difference. If I use a URL that doesn't exist I get an empty array returned.
Am I doing something wrong? Is there a different way to only get the top pages with a URL starting with http://example.org/post? Or do I have to sift through the data myself to only get the pages I want?
I am using Matomo version 3.13.5.
I figured it out. There is no need to use segment. This can be achieved using the flat, filter_column and filter_pattern parameters.
Setting flat=1 will make it so all pages are returned in a single array, instead of hierarchically.
With filter_column and filter_pattern I can filter the results.
The URL I use now is: http://example.org/matomo/index.php?&module=API&token_auth=12345&method=Actions.getPageUrls&format=json&idSite=1&period=month&date=today&flat=1&filter_column=label&filter_pattern=%5E%2Fpost%2F. This does exactly what I want.
The unencoded pattern is ^/post/, so this will filter out any page that does not start with /post/.
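
To make the call concrete, here is a small TypeScript sketch that builds the same request. The host, site id, and token are the placeholders from the question, and filter_limit=10 (to cap the result at the top 10) is an assumption on top of the answer:

// Sketch: top pages starting with /post/ for the last month.
const params = new URLSearchParams({
  module: "API",
  method: "Actions.getPageUrls",
  format: "json",
  idSite: "1",
  period: "month",
  date: "today",
  flat: "1",                 // single flat array instead of a hierarchy
  filter_column: "label",
  filter_pattern: "^/post/", // URLSearchParams handles the encoding
  filter_limit: "10",        // assumed: keep only the top 10 rows
  token_auth: "12345",       // placeholder token
});

const response = await fetch(`http://example.org/matomo/index.php?${params}`);
const pages = await response.json();
console.log(pages.map((p: { label: string }) => p.label));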

CouchDB 2.0+ count all documents with a certain field value

I'm new to CouchDB and I'm looking for the best way to perform the following count:
I've got documents that look like this:
{
  field_1: "some_string",
  field_2: "some_other_string"
}
I want to count all documents that have a certain value in field_2, so that I know what limit to set when finding all documents with that field value via POST /{db}/_find using a selector.
I've read some things about Design Documents, Views, and the Query Protocol, but I can't work out the proper way of achieving this. Do I even need to count the number of documents first? Or is there a way to quickly know this when performing the lookup?
... when finding all documents with that field value ...
I assume your purpose is to find all docs with a specific field having a certain value.
I don't know much about CouchDB mango queries with /<db>/_find or /<db>/_index, but I think you can find all docs with a specific field value by just doing HTTP requests.
Assume you have documents like the one below, and you want to find all docs whose title field equals the string Jacob:
{
  "_id": "doc5",
  "_rev": "1-27f5da5ca3157d4bca8b485d411e86c5",
  "unicodeString": "יעקב",
  "title": "Jacob"
}
You can implement a view map function like the one below. This view sorts all docs in the database by their title field in a B-tree data structure:
function (doc) {
  // Index every doc that has a title, keyed by the title;
  // the value carries the doc's unicodeString field.
  if (doc.title) {
    emit(doc.title, [doc.unicodeString]);
  }
}
Let's assume the name of the above view is by_title and the name of the database is sample. Then we can make an HTTP request like the one below to get all the docs whose title field equals, for example, Jacob:
$ curl -k -X GET https://joe:joe@192.168.1.106:6984/sample/_design/by_title/_view/by_title?key=\"Jacob\"
{"total_rows":15,"offset":8,"rows":[
{"id":"doc5","key":"Jacob","value":["יעקב"]},
{"id":"doc6","key":"Jacob","value":["ישראל"]}
]}
As you can see, the above HTTP GET request with curl returns all the docs with the title field equal to Jacob. I'm using https and port 6984, since I'm running CouchDB with SSL and a self-signed certificate, but you can use http and port 5984 instead.
Also, in the above curl command I'm passing the key with escaped double quotes, like this: ?key=\"Jacob\". The reason is that my Linux shell eats the double quotes before passing them to curl, so I have to escape them to make it work.
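
Since the original question was about counting, one possible extension is to add the built-in "_count" reduce function to the same view and query it reduced. The TypeScript sketch below reuses the host, database, and view names from the curl example; the reduce step is an assumption on top of the view shown above, not something the answer configured:

// Sketch: count docs whose title is "Jacob" via a _count reduce.
// Assumes the by_title view also declares "reduce": "_count".
const key = encodeURIComponent(JSON.stringify("Jacob")); // view keys are JSON
const url =
  "https://192.168.1.106:6984/sample/_design/by_title/_view/by_title" +
  `?key=${key}&reduce=true&group=true`;

const response = await fetch(url, {
  // Placeholder credentials from the example; a self-signed certificate
  // needs extra TLS configuration that is omitted here.
  headers: { Authorization: "Basic " + btoa("joe:joe") },
});
const { rows } = await response.json();
console.log(rows[0]?.value ?? 0); // number of docs with that title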

Configure solr to return no results by default

Our Solr is configured to return ALL results if no valid search parameters are passed in. For example:
http://localhost:8983/solr/collection1/select?rows=1&title=bar is a valid search (title is a valid field) and returns the proper number of results (1 out of many). But http://localhost:8983/solr/collection1/select?rows=1&foo=bar also returns one result out of the entire collection, even though foo is not a valid field.
I read that there is a way to configure Solr to return NO results by default (instead of all). It said to "adjust the requestHandler config to return all results by default" (which I assume means there is also a way to return none by default), but I cannot find anything online about how to actually do this.
The reason we want this is because we're implementing a blacklist of fields that we don't want the user to search on, but by doing this, it allows all other fields through and we'd like those to return no results (or even better - an error saying the field is invalid).
Solr is being called through our API that we wrote, so even if we could add on a parameter to each call to make it return no results by default (noResultsIfNoValidSearch=true or something), that would work.
So, any ideas on how to configure Solr to return NO results by default? Thanks!
Add echoParams=all to your query to see every parameter the request ends up with, coming from all configuration sources.
Most likely you define q=*:* somewhere in your configuration; that is what causes everything to be returned. Remove it and you should get nothing back.
If you are using eDisMax, you can look into the uf parameter, which allows you to restrict the fields users are allowed to query. To allow all fields except title, use uf=*-title
The easiest thing that comes to mind is to set the rows parameter to 0 in your API or config, depending on your requirements.
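
As a sketch of that API-side approach, the TypeScript below whitelists searchable fields and forces rows=0 for anything else; the field list and function name are illustrative assumptions about your wrapper API, not Solr configuration:

// Sketch: only forward whitelisted fields to Solr; for anything else,
// return a valid query that yields zero documents (or throw instead).
const SEARCHABLE_FIELDS = new Set(["title", "author", "body"]); // illustrative

function buildSolrParams(field: string, value: string): URLSearchParams {
  const params = new URLSearchParams();
  if (!SEARCHABLE_FIELDS.has(field)) {
    params.set("q", "*:*");
    params.set("rows", "0"); // no documents returned for invalid fields
    return params;
  }
  params.set("q", `${field}:"${value}"`);
  params.set("rows", "10");
  return params;
}

// e.g. buildSolrParams("foo", "bar").toString() => "q=*%3A*&rows=0"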

Solr autocomplete keyword and geolocation based

I am looking to get autocomplete suggestions from Solr based on a keyword as well as geolocation. Is there a way, with the 'Suggester' component or otherwise, for Solr to take multiple fields into account for autocompletion?
For example, if I have a restaurant database and I want suggestions for the keyword 'Piz', the results should be based both on the keyword 'Piz' and on locations close to a certain latitude/longitude.
Is there a way to do it in Solr ?
Thanks.
You can create a request handler that:
in the index analyzer chain, uses EdgeNGram to match what the user has entered so far;
boosts results with geodist(): play around with recip() etc. until you get the desired weight on the location boosting.
Then call that handler, passing the current location and the characters the user has entered, and you are done.
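
As an illustration, here is a minimal TypeScript sketch of such a request, queried through the default select endpoint for brevity. The field names name_ngram (EdgeNGram-analyzed) and location (spatial), and the restaurants collection, are assumed schema details, not anything from the answer:

// Sketch: keyword prefix match (EdgeNGram field assumed in the schema)
// additively boosted by proximity via geodist().
const params = new URLSearchParams({
  q: "Piz",                        // what the user has typed so far
  defType: "edismax",
  qf: "name_ngram",                // assumed EdgeNGram-analyzed field
  bf: "recip(geodist(),2,200,20)", // tune constants for location weight
  sfield: "location",              // assumed spatial field read by geodist()
  pt: "40.7128,-74.0060",          // user's current lat,lon
  rows: "10",
});

const response = await fetch(
  `http://localhost:8983/solr/restaurants/select?${params}`,
);
console.log((await response.json()).response.docs);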
