react-chartjs: difference between `scales.x`/`xAxes`/`xAxes[]`

I'm using react-chartjs-2 with Typescript.
I'm very confused by the Chart.js interface (perhaps this is due to severe API changes between versions, and information floating around online without clearly stating which version it applies to).
What is the difference between the following options:
options.scales.x: {}
options.scales.xAxes: {}
I thought this was equivalent to the above, but under certain circumstances I could not get options.scales.xAxes.min to work, so I resorted to using x.
options.scales.xAxes: [{}]
I see many examples using this syntax (especially here on SO). However, using it myself results in a type error.

options.scales.xAxes: [{}] is Chart.js v2 syntax: all the x axes are grouped in a single array, and likewise for all the y axes.
In v3, every scale is its own object within the scales object, where the key of the object is your scale ID.
By default you should use options.scales.x to configure the default x axis. To make migration a bit easier, Chart.js looks at the first letter of the key to determine the axis type, so passing options.scales.xAxes should give the same result as long as you don't have any other scales configured.
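A minimal sketch of the two shapes side by side (the min values below are just example numbers; note that in v2, min lived under ticks):

```typescript
// Chart.js v3 style: each scale is its own keyed object; the key is the scale ID.
const v3Options = {
  scales: {
    x: { min: 0 }, // default x axis
    y: { min: 0 }, // default y axis
    // An additional scale would get its own keyed entry, e.g. x2: { position: 'top' }.
  },
};

// Chart.js v2 style: axes grouped into arrays, with min/max under ticks.
const v2Options = {
  scales: {
    xAxes: [{ ticks: { min: 0 } }],
    yAxes: [{ ticks: { min: 0 } }],
  },
};
```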

How to get helpful intellisense when using a type alias for a string union type as a property of an object or interface?

At work, I'm involved in maintaining our React component library, which we're very slowly in the process of converting to TypeScript. (But my question is about Typescript more generally and not specific to React - that's just how the question has arisen and how I imagine it might arise for many others!)
Like many such component libraries, we often use a union of string literals as the type for some props, to ensure they always have one of a small number of approved values - a typical example might be type Size = "small" | "medium" | "large";. And while we don't always use a type alias for them as shown there, we do often enough, as the type may need to be referred to in a few different places, and it's nice (particularly with larger unions than this) not to have to type out the same thing every time - as well as knowing that if the design team ever want us to add a new extra-large size or whatever, the type will only need updating in one place.
This works fine of course, but we've discovered that in terms of intellisense it leaves quite a lot to be desired. We want our consumers, who likely don't know all the valid values off by heart, to be told what these are by their IDE when rendering the component. But by doing this in the most obvious way - i.e. like this:
type Size = "small" | "medium" | "large";
interface Props {
size: Size;
}
then when consuming the props in any form, IDEs such as VSCode will, on hovering over the size prop, simply display the rather unhelpful Size as the prop's type, rather than the explicit union of 3 strings, which would be far more helpful.
(Note that although the above uses an interface rather than a type alias - which is the way we've decided to go after some debate - the same issue is present when using type for the props type.)
This seems like it should be a common question, but many searches on google and Stack Overflow have failed to turn anything up that's specific to simple unions of strings.
There are discussions about ways to get TS to "expand" or "simplify" complex types - like this Stack Overflow question and its great answers, or this article, which basically show the same solution although they differ in details. But these seem to be aimed at - and certainly work on - object types specifically. Sadly, no matter which of these transformations is applied to the type of the size prop in the above example, TypeScript still stubbornly shows the unhelpful Size as the prop's type when actually consuming it. (For those for whom this makes sense - Haskell is among my favourite languages - I would phrase this more succinctly as: these solutions appear to work on product types but not on sum types.)
This is demonstrated in this TS playground example - specifically the size2 prop. (This shows only one form of the Expand type, but I've tried every slight variation I've either found online or have come up with myself, with no success.)
The others - size3 and size4 - are attempts at using template literal types based on the same "trick" that is behind the Expand example. As I understand it, they rely on a conditional type to force distribution of an operation across the union, while making sure the operation is essentially a no-op - so the actual type stays the same, but TS is hopefully forced to compute something across the union and output a "new", "plain" union. Since the Expand type suggested above iterates across keys - fine for an object type with properties, but of unclear meaning for a union of string literals - using a template literal as the operation seemed like the same idea adapted to a union of literal string types. And specifically, concatenating with an empty string is the only obvious way to keep the strings as they were.
That is, I thought
type NoOpTemplate<T> = T extends string ? `${T}${""}` : T;
might work. But as you will see from the playground link above (size3 prop), it doesn't.
I even hit upon the idea of using a generic type with two parameters:
type TemplateWithTwoParams<T, U extends string> = T extends string ? `${T}${U}` : T;
This "works" in a sense, because the prop defined as notQuiteSize: TemplateWithTwoParams<Size, "x">; displays as an explicit union as desired: "smallx" | "mediumx" | "largex". So surely supplying an empty string as the U parameter will do what we want here - keep the strings in the union the same, while forcing explicit display of the options?
Well no, it doesn't, as you can see from the size4 prop in the example! It seems the TS compiler is just too clever, as all I can assume is that it spots that in this case we're just concatenating with an empty string, which is a no-op, and therefore doesn't need to actually compute anything and thus outputs the same type T as it was given - even via the type alias.
I'm out of ideas now, and surprised this doesn't seem to be a common problem with clever solutions like the above Expand that I can read about online. So I'm asking about it now! How can we force TS to display a union type as an explicit union, when used as part of an object or interface, while still keeping the convenience of using an alias for it?
It turns out that I was thinking along the right lines, I just needed a different "do-nothing" operation to apply to each member of the union.
Specifically, wrapping the value in an object, and then extracting it, like this:
type WrapInObject<T> = T extends any ? { key: T } : never;
type Unwrap<T> = T extends { key: any } ? T["key"] : never;
type ExplicitUnion<T> = Unwrap<WrapInObject<T>>;
works as intended. Here's the TS playground example from the question expanded with this version - as prop sizeThatWorks, where if you hover to see the intellisense you will see the desired output of the explicit union, rather than the Size alias.
This has a further advantage of presumably working for any union, not just one of string literals. I wish I knew this would continue to work in future versions of the TS compiler - I hate to think that future versions may be "clever" enough to realise that this wrapping-and-unwrapping is a no-op, as apparently happened with my attempts using template literal types. But this seems to be the best available, at least at present.
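Putting the pieces together, a self-contained sketch (Size and the prop name are taken from the question):

```typescript
type Size = "small" | "medium" | "large";

// Wrap each union member in an object, then extract it again.
// The conditional type distributes over the union, forcing TS to
// compute - and therefore display - the expanded union.
type WrapInObject<T> = T extends any ? { key: T } : never;
type Unwrap<T> = T extends { key: any } ? T["key"] : never;
type ExplicitUnion<T> = Unwrap<WrapInObject<T>>;

interface Props {
  // Hovering over "size" shows "small" | "medium" | "large", not Size.
  size: ExplicitUnion<Size>;
}

const p: Props = { size: "medium" };
```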

Why gdal.Open(path).ReadAsArray() produces different results from tifffile.imread(path)

Since Colab won't let me use tifffile.imread() (it raises the error 'ValueError: <COMPRESSION.LZW: 5> requires the 'imagecodecs' package'), I use gdal.Open(path).ReadAsArray() to read the tif file and generate input data for the model to run inference on. This results in:
When I use tifffile.imread() to read the same tif file on another platform and create the input with the same procedure as above, the model prediction comes out as:
The results in the second image make sense for the classes I want to predict. I just want to know the reason for this difference. It seems gdal changed the order of the pixels?
This finally works for me. Since I cannot use tifffile.imread() in Colab, I tried gdal and rasterio, but the image shape is channel-first. The problem in the first figure is caused by np.reshape(), which only reinterprets the raw values in memory order without transposing the axes, so the pixel order is scrambled.
I then used the code below and the issue was resolved:
import numpy as np
import rasterio as rio

with rio.open("gdrive/My Drive/file.tif") as ds:
    arr = ds.read()              # shape: (bands, rows, cols)
arr = np.moveaxis(arr, 0, -1)    # shape: (rows, cols, bands); note moveaxis returns a new array

Azure Maps - Unable to set layer options 'offset' from point properties without browser warnings

I'm trying to use the offset parameters for the text label in relation to the position of the point (marker) on the map.
In my app, the user sets their preference to the x & y-axis offset values, which when changed, update the map point properties. I then need to use data-driven expressions to pull the values from each map point properties when updating the point layer options.
When the point is first created, the offset property of the point is set as below:
offset: [0, 0],
When updating the point layer, I've tried the expression formulas below, but none of them works without producing warnings in the browser debug console.
layers.pointLayer.setOptions({
    iconOptions: {
        offset: [
            'case',              // Use a conditional case expression.
            ['has', 'offset'],   // Check whether the feature has an "offset" property.
            ['get', 'offset'],   // Works on the map, but not without a browser warning.
            ['literal', [0, 0]]  // If it doesn't, default to the array [0, 0] (x & y axis).
            // Other attempts that did not work:
            //   ['length', ['array', ['get', 'offset']]]
            //   ['get', ['literal', [0, 0]]]
            //   ['literal', [0, 0]] on its own works, but is set locally rather than pulled from the point properties!
        ]
    }
});
If using the example ['get', 'offset'], in the expression, although I can actually modify the offset and it works on the map as shown in the screenshot, I get the following warning in the browser debug console:
I'd like a warning-free environment. I obviously need to get the formatting of the data-driven expression right when setting the layer options (second code sample), but none of the syntax variants I've tried so far works correctly.
I also tried studying the MS example here, but it seems they don't actually pull the offset values from the map point properties; they set the layer options directly from the user form, which is no good unless I wanted to apply a global change to all points that belong to this particular layer.
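For reference, the nesting the expression is aiming for can be written out as plain data (the property name offset and the [0, 0] default are taken from the snippets above; the Expression type here is purely illustrative, not part of the Azure Maps SDK):

```typescript
// A data-driven style expression is just a nested array.
type Expression = (string | number | number[] | Expression)[];

const offsetExpression: Expression = [
  'case',              // conditional case expression
  ['has', 'offset'],   // condition: does the feature have an "offset" property?
  ['get', 'offset'],   // then: use the feature's own value
  ['literal', [0, 0]], // else: fall back to a default pixel offset
];
```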

React places autocomplete multiples types issue (Nextjs) [duplicate]

The input box is initialized properly but it is not generating any suggestions. Can any one point out what I am doing wrong? Here is the code.
Update
I have investigated the issue. The problem is in the line:
types: ['(cities)', '(regions)']
when I specify only one type, types: ['(cities)'], it works, no matter whether it's regions or cities. But the two types do not work together, even though the documentation clearly says that types is an array of strings whose valid values are 'establishment', 'geocode', '(regions)' and '(cities)'.
As mentioned in the documentation:
"types, which can either specify one of two explicit types or one of two type collections."
This means that the types array only supports one parameter.
If you think it would be a useful feature to support more than one parameter or a mixture of explicit types and collections, please file a Places API - Feature Request.
As Chris mentioned, there is currently no way to get results for multiple place types.
However, you can call the API twice with different place types and populate a map (or whatever you're building) with both types. For example:
var request1 = {
    location: event.latLng,
    radius: 8047,
    types: ['cafe'],
};

var request2 = {
    location: event.latLng,
    radius: 8047,
    types: ['library'],
};
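The two result sets then need to be combined client-side. A sketch of a de-duplicating merge (the PlaceResult shape here is trimmed down to just place_id and name, and mergeResults is a hypothetical helper, not part of the Places API):

```typescript
interface PlaceResult {
  place_id: string;
  name: string;
}

// Merge the results from the two search requests' callbacks,
// de-duplicating by place_id in case a place matches both types.
function mergeResults(a: PlaceResult[], b: PlaceResult[]): PlaceResult[] {
  const seen = new Map<string, PlaceResult>();
  for (const r of [...a, ...b]) {
    if (!seen.has(r.place_id)) seen.set(r.place_id, r);
  }
  return [...seen.values()];
}
```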
There are no results of type '(cities)' or '(regions)'. If I change the types array to ['establishment'], I get results.
Working example
If you use only one type, it works:
Example

WebGL: Which array arguments must be typed arrays?

I noticed that the "mouse events" (and "textures") demo here runs in Chrome, Firefox and Opera (the interface is a little bit bad, so bear with it).
As you can see, the Model, View and Projection matrices are being supplied as vanilla JavaScript arrays. Float32Array only appears once in the 2 scripts, and that is for uploading cube vertex data.
There's something I don't understand about this, because I've thus far assumed all data must go up as typed arrays. I see these options:
All arrays DO have to go to calls as typed arrays, yet conversions are implicit.
Only certain calls require typed arrays as input. If so, which do and which don't? Where can I review this, given that WebGL doesn't seem to have official API docs yet?
There are discrepancies between how different browser implementations handle this: Some may do implicit array conversion, while others may not.
The WebGL specification has been available for some time. You can get it from the WebGL Khronos Site. As you can see from the spec, several functions are overloaded, in particular those accepting uniforms (which is how you're specifying the various matrices you mention), to accept both JavaScript arrays and typed arrays. Other functions—mostly those taking larger amounts of data (e.g., textures, vertex arrays, etc.)—are limited to using only typed arrays for performance reasons.
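A sketch of the distinction (the gl calls are commented out since they need a live WebGL context; uniformMatrix4fv's plain-array overload and bufferData's typed-array requirement are per the WebGL 1.0 spec):

```typescript
// Uniform setters are overloaded: they accept either a plain JavaScript
// number array (sequence<GLfloat>) or a Float32Array.
const modelMatrix: number[] = [
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  0, 0, 0, 1,
];
// gl.uniformMatrix4fv(location, false, modelMatrix);                   // OK
// gl.uniformMatrix4fv(location, false, new Float32Array(modelMatrix)); // also OK

// Bulk-data calls such as bufferData accept only an ArrayBuffer(View)
// (or a size), so vertex data must go up as a typed array.
const vertexData = new Float32Array([-1, -1, 1, -1, 0, 1]);
// gl.bufferData(gl.ARRAY_BUFFER, vertexData, gl.STATIC_DRAW);
```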
