According to the documentation (https://developer.garmin.com/connect-iq/programmers-guide/resource-compiler/), the resource compiler supports GIF as a Bitmap. However, when I display a GIF file, I just get a still picture; the GIF doesn't animate.
The GIF I have been testing with is this: http://bestanimations.com/Animals/Mammals/Cats/cats/cute-kitty-animated-gif-2.gif
I have saved the GIF in the drawables folder (I use the Connect IQ plugin for Eclipse).
I have tried to include the Bitmap in the layouts resources as:
<layout id="MainLayout">
    <bitmap id="MotivatorCat" x="center" y="center" filename="../drawables/motivatorcat.gif"/>
</layout>
and I have tried to include it in the drawables resources as:
<drawables>
    <bitmap id="MotivatorCat" filename="motivatorcat.gif" />
</drawables>
and then loading it in initialize() by:
catgif = Ui.loadResource(Rez.Drawables.MotivatorCat);
and drawing it in onUpdate():
dc.drawBitmap(50, 50, catgif);
But nothing works.
What am I doing wrong?
Connect IQ does not currently (as of SDK 2.1.x) support rendering animated GIF images, which is why you only get a still picture.
So the simple idea is that I have SVG data as a string that I fetch from the Internet.
I would like to show that as an icon in my app. Is there any way I can do this?
I have seen countless examples where the SVG data is in a file located in the app's directory and is then shown, but this is not what I am looking for. I literally have the data in XML format after an HTTP request, and I only need to transform it into an Image or something else visible on the screen.
I have been trying to find a solution to this for hours now, so I would really appreciate some help :S
Android doesn't support SVG in ImageView directly, but you can display SVGs with some commonly used third-party libraries, such as FFImageLoading.
Add the Xamarin.FFImageLoading.Svg NuGet package to the platform project, then use ImageService with the SVG data resolver to display SVG images.
For Android:
<ImageView
    android:id="@+id/image_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent"/>
then use it like this:
var svgString = @"<svg><rect width=""30"" height=""30"" style=""fill:blue"" /></svg>";
ImageView imageView = FindViewById<ImageView>(Resource.Id.image_view);
ImageService.Instance
    .LoadString(svgString)
    .LoadingPlaceholder(placeHolderPath)
    .WithCustomDataResolver(new SvgDataResolver(64, 0, true))
    .WithCustomLoadingPlaceholderDataResolver(new SvgDataResolver(64, 0, true))
    .Into(imageView);
I am trying to do something I feel is very simple, yet it seems I am clearly misunderstanding a crucial piece of Mapbox's addLayer feature.
The Goal
Create dynamically identified icons based on a feature's data value (e.g. a GeoJSON feature with the data value title: "walmart"). Essentially, I just want to add dynamic store icons from the sprite image when those locations are queried via Tilequery.
The problem
I keep getting an error when trying to use the sprite values from the style:
Error: util.js:349 Image "airport-11" could not be loaded. Please make sure you have added the image with map.addImage() or a "sprite" property in your style. You can provide missing images by listening for the "styleimagemissing" map event.
I see tons of resources talking about sprites, but none discuss how exactly to implement them in this fashion. I have even tried querying the sprite and then using dot notation to access the sprite values, but that gives an "undefined" / invalid value error.
Example code:
map.addLayer({
    id: "tilequery-points",
    type: "symbol",
    source: "tilequery", // Set the layer source
    layout: {
        "icon-image": [
            "match",
            ["get", "title"],
            ["HEB"],
            "H-E-B_logo",
            ["Pilot Flying j"],
            sprite.Pilot_Travel_Centers_logo,
            // "Pilot_Travel_Centers_logo",
            ["Dollar General"],
            "Dollar_General_logo",
            ["Cumberland Farms Corp"],
            "Cumberland_Farms_logo (1)",
            ["CEFCO"],
            "CEFCO-convenience-stores-Logo_510px",
            ["BJs Wholesale Inc"],
The Question
How do I access the sprite values without getting an error?!
Thanks for the help! I wouldn't ask if I didn't need it!
UPDATE
I have figured out that sprite images are automatically available to any layer if they are in your Mapbox Studio sprite image collection. The confusion was that previously I was not able to use them from a link; with the Studio sprite, however, it works automatically.
Hope it helps!
It's true the documentation about sprites is not super clear. I'll try to summarise (simplifying a bit).
A Mapbox GL style has one sprite. That's a PNG containing all the icons, plus a JSON file specifying what each icon is called (its icon ID) and where it is located within the PNG. The sprite is specified by giving a URL as the sprite property: https://docs.mapbox.com/mapbox-gl-js/style-spec/sprite/
You can also add images to the sprite dynamically after the map loads, with map.loadImage and map.addImage, specifying the icon ID.
To display an icon, you use that same ID in a symbol layer: "icon-image": "myicon".
You can run into trouble when you try to combine your own icons with those in a Mapbox basemap (which are Maki icons with names like `airport-11`).
To combine them, you can do one of these three things:
upload your icons to a style in Mapbox Studio
load your icons dynamically
generate a new sprite sheet offline, using something like mbsprite
I don't know what you meant about "dot notation", but no, that's not the right path.
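As a rough sketch of the second option (loading icons dynamically), assuming Mapbox GL JS with the callback-style loadImage; the URL and icon ID here are placeholders, not from the question:

map.on("load", () => {
    // Fetch a PNG and register it under an icon ID of your choosing.
    map.loadImage("https://example.com/icons/heb.png", (error, image) => {
        if (error) throw error;
        if (!map.hasImage("H-E-B_logo")) {
            map.addImage("H-E-B_logo", image);
        }
        // "icon-image": "H-E-B_logo" will now resolve in your symbol layer.
    });
});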
Currently, I can only use jpeg & png exports like:
stageRef.current?.getStage().toDataURL({ mimeType: 'image/jpeg', quality: 1 })
stageRef.current?.getStage().toDataURL({ mimeType: 'image/png', quality: 1 })
I want to export the canvas to SVG as well as PDF, like Figma does.
I found out about Data URIs, which led me to MIME types. There they write that application/pdf and image/svg+xml should work, but when I use those I still get a .png image.
Is there any way to achieve .svg & .pdf from Canvas in Konva?
stage.toDataURL() uses the canvas.toDataURL() API to do the export. Most browsers support only JPEG and PNG formats.
For SVG or PDF exports you have to write your own implementation.
For PDF exports you may use external libraries to generate a PDF file from the image produced by the stage.toDataURL() method. As a demo, take a look at the Saving Konva stage to PDF demo.
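One hypothetical approach is the jsPDF library (a sketch only; stage, stageWidth, and stageHeight are assumed variables holding your Konva stage and its pixel size):

import { jsPDF } from "jspdf";

// Render the stage to a raster image, then embed it in a single-page PDF.
const dataURL = stage.toDataURL({ mimeType: "image/png", pixelRatio: 2 });
const pdf = new jsPDF({ orientation: "landscape", unit: "pt", format: [stageWidth, stageHeight] });
pdf.addImage(dataURL, "PNG", 0, 0, stageWidth, stageHeight);
pdf.save("stage.pdf");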
There are no built-in methods for SVG exports in the Konva library; you have to write your own implementation. If you use basic shapes such as Rect, Circle, and Text without any fancy filters, writing such a conversion shouldn't be hard, because there are similar tags in the SVG spec.
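As a tiny sketch of what such a conversion could look like (rectToSvg is a made-up helper, not a Konva API):

// Hypothetical helper: serialize one Konva.Rect into an SVG <rect> tag.
function rectToSvg(rect) {
    return `<rect x="${rect.x()}" y="${rect.y()}" width="${rect.width()}" ` +
           `height="${rect.height()}" fill="${rect.fill()}" />`;
}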
toDataURL() exports a bitmap rather than a vector.
You can generate an SVG by using the canvas2svg package.
You can temporarily set your Layer's context to a c2s instance, render the layer, and then reset the Layer's context to what it was previously.
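A rough sketch of that approach, assuming C2S is the constructor canvas2svg exposes; note that _context is a Konva internal, so this may break across versions:

import C2S from "canvas2svg";

// Swap the layer's 2D context for a canvas2svg instance, redraw, then restore.
const layer = stage.getLayers()[0];
const c2s = new C2S(stage.width(), stage.height());
const realCtx = layer.getContext()._context;
layer.getContext()._context = c2s; // draw calls now produce SVG commands
layer.draw();
const svgString = c2s.getSerializedSvg();
layer.getContext()._context = realCtx; // put the real canvas context back
layer.draw();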
I know there are several options for where to store and how to serve images in React.
Two options are:
store the image in the public folder and then use it like this:
<img src={process.env.PUBLIC_URL + '/someImage.jpg'} />
store the image somewhere else in the project, import it and use it like this:
import image from '../../assets/images/someImage.jpg'
<img src={image} />
I now need to create an img tag with a srcSet of a few images so that the correct size is shown according to the screen width.
To avoid adding many lines of code (4 or 5 sizes for each image) for importing the images as I do today (I am using the second method described above), I am thinking of moving them to the public folder; then I won't need to import all the versions, only mention them in the srcSet, as in the sketch below.
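For example, with the public folder the srcSet could be built without any imports (a sketch; the -400/-800/-1200 file name suffixes are placeholders):

<img
    src={process.env.PUBLIC_URL + '/someImage-800.jpg'}
    srcSet={[400, 800, 1200]
        .map((w) => `${process.env.PUBLIC_URL}/someImage-${w}.jpg ${w}w`)
        .join(', ')}
    sizes="100vw"
    alt=""
/>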
My question is: what are the differences between serving images using the first vs. the second method above? And which option would you work with if you have many images and want to set a srcSet with a few versions for each image?
I am trying to get a video to play inside a video tag at the top left-hand corner of my page. It loads OK, the resolution is good, and it seems to be looping, but it is lagging badly, definitely not achieving 60 fps. It is in MP4 format, and the resolution of the original MP4 is 1920x1080; it is a high-resolution free VJ loop called GlassVein (you can see it if you search on YouTube). Right-clicking and opening Properties gives the following information:
Bitrate: 127 kbps
Data rate: 11270 kbps
Total bitrate: 11398 kbps
Audio sample rate: 44 kHz
File type: VLC media file (.mp4)
(But I do not want or need the audio.)
It also says 30 fps, but I'm not sure I believe that, as it runs smooth as butter in VLC media player: no lagging, just a smooth looping animation.
I have searched https://trac.ffmpeg.org/wiki/Encode/AAC for encoding information, but it is complete gobbledygook to me; I don't understand a word it's saying.
My code so far is as follows:
<video src="GlassVeinColorful.mp4" autoplay="1" preload="auto"
-movflags class="Vid" width="640" height="360" loop="1" viewport=""
faststart mpeg4 -s 320x240 -r 1080 -b 128k>
</video>
Does anyone know why this is lagging so much, or what I could do about it?
It is a quality animation and I don't really want to lose any of its resolution or crispness. The -s section was originally set to 1920x1080, as that is what the original file is, but I have changed it to try to render it quicker...
Any helpful sites, articles, or answers would be great.
2020 Update
The solution to this problem was to convert the video to WebM, then use JavaScript and an HTML5 canvas element to render the video to the page instead of using the video tag to embed it.
HTML
<section id="Theater">
    <video width="684" height="auto" muted loop autoplay>
        <source src="Imgs/Vid/PurpGlassVein.webm" type="video/webm">
    </video>
    <canvas style="filter:opacity(0);"></canvas>
</section><!-- Closing Section for the Header -->
CSS
video {
    display: none !important;
    visibility: hidden;
}
JavaScript
const Canv = document.querySelector("canvas");
const Video = document.querySelector("video");
const Ctx = Canv.getContext("2d");

// Copy each video frame onto the canvas once playback starts.
Video.addEventListener('play', () => {
    function step() {
        Ctx.drawImage(Video, 0, 0, Canv.width, Canv.height);
        requestAnimationFrame(step);
    }
    requestAnimationFrame(step);
});
Canv.animate({
    filter: ['opacity(0) blur(5.28px)', 'opacity(1) blur(8.20px)']
}, {
    duration: 7288,
    fill: 'forwards',
    easing: 'ease-in',
    iterations: 1,
    delay: 728
});
I've also used the vanilla JavaScript .animate() API to fade the element into the page when the page loads. One caveat is that both the canvas and the off-screen video tag must match the original video's resolution, otherwise it starts to lag again; however, you can use CSS to scale it down via transform: scale(0.5);, which doesn't seem to affect performance at all.
It runs smooth as butter and doesn't lose any of the high-resolution image quality.
I added a slight 0.34px blur as well, to smooth it even more.
I could possibly have used ffmpeg to get a better (smaller file size) WebM output file, but that's something I'll have to look into at a later date.
Video over IP connections is subject to network conditions, and 60 fps at that resolution is quite a high quality to try to maintain without any delay or buffering.
Most 'serious' video services, including YouTube, Netflix, etc., provide multiple bit-rate streams to allow for different network conditions and different device capabilities.
Clients can switch between the streams as they download the video chunk by chunk, so they can choose the best resolution possible for the current network conditions when they request a new chunk.
See here for an example: https://stackoverflow.com/a/42365034/334402
I recently went back to this project and went back over the code. I found that converting the video to WebM and using the HTML canvas element to display the VJ loop has made the performance 10x better. I will upload the code for writing the data to the canvas when I can find it; my projects folder is kind of messy and unorganised.
The main idea, though, is having an offscreen canvas with display:none, and then reading that data into another canvas that is displayed on the screen (see the sketch below).
This seems to have fixed the issue I was facing.
See the edit above (in the question) if you are facing any of the same issues or problems.
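A minimal sketch of that offscreen-canvas idea (the selectors are placeholders; the buffer canvas is never attached to the DOM):

const video = document.querySelector("video");
const screen = document.querySelector("canvas");  // the visible canvas
const buffer = document.createElement("canvas");  // offscreen buffer

// Match the source resolution to avoid per-frame scaling cost.
video.addEventListener("loadedmetadata", () => {
    buffer.width = screen.width = video.videoWidth;
    buffer.height = screen.height = video.videoHeight;
});

const bufCtx = buffer.getContext("2d");
const scrCtx = screen.getContext("2d");

video.addEventListener("play", () => {
    function step() {
        bufCtx.drawImage(video, 0, 0, buffer.width, buffer.height); // video -> buffer
        scrCtx.drawImage(buffer, 0, 0);                             // buffer -> screen
        requestAnimationFrame(step);
    }
    requestAnimationFrame(step);
});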