I want to save small images in my Room database, and I have two issues:
How do I save an image in my database?
How do I save multiple images in my database?
I tried saving a bitmap in the way recommended on the developer page (https://developer.android.com/training/data-storage/room/defining-data):
@Parcelize
@Entity(tableName = "image_table")
data class ImgMod(
    @PrimaryKey(autoGenerate = true)
    var invoiceId: Long = 0L,
    @ColumnInfo(name = "image")
    var single_img: Bitmap?
) : Parcelable
However, I receive the following error:
Cannot figure out how to save this field into database. You can consider adding a type converter for it.
Secondly, I would like to save multiple images in one database entry. But I receive the same error with the following snippets:
@ColumnInfo(name = "imageList")
var img_list: ArrayList<Bitmap>
or with each bitmap decoded into a String:
@ColumnInfo(name = "imageList")
var decoded_img_list: ArrayList<String>
I am sorry if this is a very basic question, but how do I have to configure the database, or process the data, to store a list of images?
Thank you in advance,
rot8
A very simple way to get images into the database (although I personally discourage it) would be to base64-encode the bitmaps into a String and put that into a column of the database.
Take into account that bitmaps are very memory-heavy, and base64 encoding something increases its size even more, so be careful when loading a bunch of images... I also think Room and SQLite support binary data as BLOBs, so you could just declare a column as ByteArray and it should just work.
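For reference, a minimal type converter along those lines could look roughly like the sketch below (written in Java; the class and method names are placeholders, and PNG is chosen only because it is lossless):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import androidx.room.TypeConverter;
import java.io.ByteArrayOutputStream;

public class BitmapConverters {

    // Bitmap -> BLOB: compress the bitmap into PNG bytes (the quality argument is ignored for PNG)
    @TypeConverter
    public static byte[] fromBitmap(Bitmap bitmap) {
        if (bitmap == null) return null;
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
        return out.toByteArray();
    }

    // BLOB -> Bitmap: decode the stored bytes back into a Bitmap
    @TypeConverter
    public static Bitmap toBitmap(byte[] bytes) {
        if (bytes == null) return null;
        return BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    }
}

Register it on the @Database class with @TypeConverters(BitmapConverters.class) (or BitmapConverters::class from Kotlin), and the Bitmap? field from the question should then be stored as a BLOB column.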
What I've been doing in my projects is to write the images to my app's internal or external storage and then store the Uri as a String, so I can later retrieve the image from disk.
Something I discourage even more is stuffing more than one value into a single column; having a list of things inside a column in SQL is definitely not a pattern you should follow. Creating a "join" table, or simply adding an extra column you can group them by, should be easy enough, right?
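If it helps, a sketch of that separate-table idea in Room could be as simple as one row per image pointing back at its invoice (Java again; the entity and column names here are made up for illustration):

import androidx.room.ColumnInfo;
import androidx.room.Entity;
import androidx.room.PrimaryKey;

@Entity(tableName = "invoice_image_table")
public class InvoiceImage {

    @PrimaryKey(autoGenerate = true)
    public long imageId;

    // Which invoice (ImgMod.invoiceId) this image belongs to;
    // this could also be enforced with @ForeignKey on the entity.
    @ColumnInfo(name = "owner_invoice_id")
    public long ownerInvoiceId;

    // The image itself as a BLOB (via a converter like the one above),
    // or alternatively a Uri/path stored as a String.
    @ColumnInfo(name = "image")
    public byte[] image;
}

A DAO query along the lines of SELECT * FROM invoice_image_table WHERE owner_invoice_id = :invoiceId then returns all images for one invoice, without ever putting a list into a single column.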
My system:
Windows 8.1
MATLAB 2015a
My issue: When I save a JPG image in a structure array, in this case stiAll{i,y}
fileName = strcat('group_',strGr,'_',strVal,'.jpg');
fileNameStr = char(fileName);
stiAll{i,y} = imread(fileNameStr);
and then try to retrieve the saved image with image(stiAll(i,y)), I get the following error message from MATLAB:
Invalid datatype for Image CData. Numeric or logical matrix required for image CData.
If I save the image without the {i,y} suffix, so that the image is saved in a normal variable, not in a structure array, I can retrieve the image. However, for my programme I would need to save images in the respective cells of a structure array or something similar.
Any idea how to get this done successfully?
Thanks
J
stiAll{i,y} = imread(fileNameStr); stores the image in a cell array. But with image(stiAll(i,y)) you are passing a 1x1 cell, not a matrix. Try image(stiAll{i,y}) instead.
I think I understand this correctly, but I just want to double-check to be sure. Suppose I write a binary file using VBA where the first x bytes represent some field, the next x bytes another field, and so on. Now suppose I later read that binary file back into VBA using a byte array. Is it reasonable to assume that the first x elements in the byte array directly correspond to the first x bytes in the file?
I should have made this clear from the get-go, the format and header of the file isn't all that important, I'm just trying to get more into the nitty-gritty of reading and writing binary files and using byte arrays with vba. I'm getting there, and I appreciate everyone's input.
When writing binary files, write headers so you can check whether the given bytes really match your format. Take a look into a WAV file with a hex editor (I use HxD) and you will see something like this:
RIFFŽ...WAVEfmt ........D¬...±......dataà€....
RIFF is the header of the container (Resource Interchange Format),
followed by some bytes of meta information.
WAVE is the header of the actual wave data,
and then comes the data you might want to interpret.
Here are two example binary read/write methods (converted to VB.NET from C#)
Public Shared Sub Write(This As YOUR_TYPE, stream As BinaryWriter)
    stream.Write(FILE_IDENTIFIER)
    stream.Write(FILE_VERSION)
    While True
        stream.Write(...) ' write the remaining fields here
    End While
End Sub

Public Shared Function read(stream As BinaryReader) As YOUR_TYPE
    If Not Enumerable.SequenceEqual(FILE_IDENTIFIER, stream.ReadBytes(FILE_IDENTIFIER.Length)) Then
        Throw New FormatException("header mismatch")
    End If
    If stream.ReadByte() <> FILE_VERSION Then
        Throw New NotSupportedException("version mismatch")
    End If
    Dim result As New YOUR_TYPE()
    While True
        stream.Read(...) ' read the remaining fields here
    End While
    Return result
End Function
I am developing a LightSwitch application that generates barcodes (QR images) for tickets. I am calling an encode function that converts text to bitmap.
I just need to save this in a LightSwitch Image field.
I have this:
QRCodeEncoder qrCodeEncoder = new QRCodeEncoder();
EditableImage image = qrCodeEncoder.Encode(data);
I want this:
ticket.QRImage = .....???
I am using this library for the QR
http://www.jeff.wilcox.name/2009/09/quick-read-silverlight-barcodes/
http://www.codeproject.com/Articles/20574/Open-Source-QRCode-Library
You can get the bytes by calling image.GetStream(), and then using one of the standard methods to get the bytes out of the stream (see How to convert an Stream into a byte[] in C#?)
I'm posting here because I'm having some difficulty dealing with pictures in Java. I would like to be able to convert a picture into a byte[] array, and then to be able to do the reverse operation, so I can change the RGB of each pixel and then make a new picture. I want to use this solution because setRGB() and getRGB() of BufferedImage may be too slow for huge pictures (correct me if I'm wrong).
I read some posts here on how to obtain a byte[] array (such as here) so that each pixel is represented by 3 or 4 cells of the array containing the red, green and blue values (plus the alpha value when there are 4 cells), which is quite useful and easy to use for me. Here's the code I use to obtain this array (stored in a PixelArray class I've created):
public PixelArray(BufferedImage image)
{
    width = image.getWidth();
    height = image.getHeight();
    DataBuffer toArray = image.getRaster().getDataBuffer();
    array = ((DataBufferByte) toArray).getData();
    hasAlphaChannel = image.getAlphaRaster() != null;
}
My big problem is that I haven't found any efficient method to convert this byte[] array back to a new image, say if I wanted to transform the picture (for example, remove the blue/green values and keep only the red one). I tried these solutions:
1) Making a DataBuffer object, then a SampleModel, to finally create a WritableRaster and then a BufferedImage (with additional ColorModel and Hashtable objects). It didn't work because I apparently don't have all the information I need (I have no idea what the Hashtable in the BufferedImage() constructor is for).
2) Using a ByteArrayInputStream. This didn't work because the byte[] array expected by ByteArrayInputStream has nothing to do with mine: it represents each byte of the file, and not each component of each pixel (with 3-4 bytes per pixel)...
Could someone help me?
Try this:
private BufferedImage createImageFromBytes(byte[] imageData) {
    ByteArrayInputStream bais = new ByteArrayInputStream(imageData);
    try {
        return ImageIO.read(bais);
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
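One thing to keep in mind with this answer: ImageIO.read expects encoded image bytes (PNG, JPEG, and so on), not a raw pixel buffer like the one obtained from getDataBuffer(). Assuming you start from a BufferedImage, a matching way to produce such bytes would be something like the following (the helper name is just for illustration):

import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

// Encode a BufferedImage into PNG bytes that createImageFromBytes(...) above can read back.
private byte[] toPngBytes(BufferedImage source) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    ImageIO.write(source, "png", baos);
    return baos.toByteArray();
}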
I have tried the approaches mentioned here, but for some reason neither of them worked. Using ByteArrayInputStream and ImageIO.read(...) returns null, whereas byte[] array = ((DataBufferByte) image.getRaster().getDataBuffer()).getData(); returns a copy of the image data, not a direct reference to it (see also here).
However, the following worked for me. Let's suppose that the dimensions and the type of the image data are known. Also let byte[] srcbuf be the buffer of the data to be converted into a BufferedImage. Then:
Create a blank image, for example
img = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
Convert the data array to a Raster and use setData to fill the image, i.e.
img.setData(Raster.createRaster(img.getSampleModel(), new DataBufferByte(srcbuf, srcbuf.length), new Point()));
Another approach is to create the target image first and copy your pixel array straight into its backing buffer:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
byte[] array = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
System.arraycopy(pixelArray, 0, array, 0, array.length);
This method does tend to get out of sync when you try to use the Graphics object of the resulting image. If you need to draw on top of your image, construct a second image (which can be persistent, i.e. not constructed every time but re-used) and drawImage the first one onto it.
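For example, a small sketch of that second-image idea, using java.awt.Graphics2D (canvas and pixelImage are placeholder names; pixelImage stands for the byte-backed image built above):

// Draw the byte-backed image onto a separate, ordinary image and do any extra drawing there.
BufferedImage canvas = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
Graphics2D g = canvas.createGraphics();
g.drawImage(pixelImage, 0, 0, null);   // copy the pixel-array-backed image onto the canvas
g.drawString("overlay text", 10, 20);  // further drawing goes on the canvas, not on pixelImage
g.dispose();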
Several people upvoted the comment that the accepted answer is wrong.
If the accepted answer isn't working, it may be because ImageIO doesn't have support for the type of image you're trying to read, for example TIFF images.
To make it work, you need to add an extra jar to handle the image type.
You can add jai-imageio-core-1.3.1.jar to your classpath with:
<!-- https://mvnrepository.com/artifact/com.github.jai-imageio/jai-imageio-core -->
<dependency>
<groupId>com.github.jai-imageio</groupId>
<artifactId>jai-imageio-core</artifactId>
<version>1.3.1</version>
</dependency>
To add support for:
wbmp
bmp
pcx
pnm
raw
tiff
gif (write)
You can check the list of supported formats with:
for (String format : ImageIO.getReaderFormatNames())
    System.out.println(format);
Note that you only have to drop the jar (jai-imageio-core-1.3.1.jar for example) into your classpath to make it work.
Other projects that add additional support for image types include:
https://github.com/haraldk/TwelveMonkeys
https://github.com/geosolutions-it/imageio-ext
The approach of using ImageIO.read directly is not right in some cases. In my case, the raw byte[] doesn't contain any information about the width, height, or format of the image. Using only ImageIO.read, it is impossible for the program to construct a valid image.
It is necessary to pass the basic information about the image to the BufferedImage constructor:
BufferedImage outBufImg = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
Then set the data for the BufferedImage object using setRGB or setData. (When using setRGB, it seems we must convert the byte[] to an int[] first; as a result, it may cause performance issues if the source image data is big. setData is probably a better idea for large byte[] source data.)
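A sketch of the setData route, assuming the bytes are in 3-byte BGR order and the array length is exactly width * height * 3 (srcBytes is a placeholder name; the classes used are java.awt.image.DataBufferByte, java.awt.image.Raster and java.awt.Point):

// Wrap the raw BGR bytes in a DataBuffer, build a Raster that matches the image's
// sample model, and copy it into the image with a single setData call.
BufferedImage outBufImg = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
DataBufferByte dataBuffer = new DataBufferByte(srcBytes, srcBytes.length);
Raster raster = Raster.createRaster(outBufImg.getSampleModel(), dataBuffer, new Point(0, 0));
outBufImg.setData(raster);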