How to change the size of an emoji? (discord.js)

Let me be more specific here. I'm making a Discord bot, and I made an enlarge command that enlarges any emoji provided (i.e., makes it look bigger). So far I've had it send the emoji as an image, but it's too small. What do I have to add to change the size, or is there no way to do so? I'll leave the code that sends it below in case someone needs it. Thank you in advance.
const parsedEmoji = Util.parseEmoji(emoji)
if (parsedEmoji.id) {
    const extension = parsedEmoji.animated ? ".gif" : ".png"
    const url = `https://cdn.discordapp.com/emojis/${parsedEmoji.id + extension}`
}

From the developer docs: just add the ?size=<number> query string to change the size of the emoji. Example:
https://cdn.discordapp.com/emojis/${parsedEmoji.id + extension}?size=256
The size can be any power of two between 16 and 4096, so all the valid sizes are 16, 32, 64, 128, 256, 512, 1024, 2048, and 4096.

Related

read cloud storage content with "gzip" encoding for "application/octet-stream" type content

We're using the Google Cloud Storage Client Library for App Engine, simply setting GcsFileOptions.Builder.contentEncoding("gzip") at file creation time. We get the following problem when reading the file:
com.google.appengine.tools.cloudstorage.NonRetriableException: java.lang.RuntimeException: com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1#1c07d21: Unexpected cause of ExecutionException
at com.google.appengine.tools.cloudstorage.RetryHelper.doRetry(RetryHelper.java:87)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:129)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:123)
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl.read(SimpleGcsInputChannelImpl.java:81)
...
Caused by: java.lang.RuntimeException: com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1#1c07d21: Unexpected cause of ExecutionException
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1.call(SimpleGcsInputChannelImpl.java:101)
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1.call(SimpleGcsInputChannelImpl.java:81)
at com.google.appengine.tools.cloudstorage.RetryHelper.doRetry(RetryHelper.java:75)
... 56 more
Caused by: java.lang.IllegalStateException: com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService$2#1d8c25d: got 46483 > wanted 19823
at com.google.common.base.Preconditions.checkState(Preconditions.java:177)
at com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:418)
at com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:398)
at com.google.appengine.api.utils.FutureWrapper.wrapAndCache(FutureWrapper.java:53)
at com.google.appengine.api.utils.FutureWrapper.get(FutureWrapper.java:90)
at com.google.appengine.tools.cloudstorage.SimpleGcsInputChannelImpl$1.call(SimpleGcsInputChannelImpl.java:86)
... 58 more
What else should be added to read files with "gzip" compression in App Engine? (Fetching the Cloud Storage URL with curl from the client side works fine for both compressed and uncompressed files.)
This is the code that works for uncompressed object:
byte[] blobContent = new byte[0];
try
{
    GcsFileMetadata metaData = gcsService.getMetadata(fileName);
    int fileSize = (int) metaData.getLength();
    final int chunkSize = BlobstoreService.MAX_BLOB_FETCH_SIZE;
    LOG.info("content encoding: " + metaData.getOptions().getContentEncoding()); // "gzip" here
    LOG.info("input size " + fileSize); // the size is obviously the compressed size!
    for (long offset = 0; offset < fileSize;)
    {
        if (offset != 0)
        {
            LOG.info("Handling extra size for " + filePath + " at " + offset);
        }
        final int size = Math.min(chunkSize, fileSize);
        ByteBuffer result = ByteBuffer.allocate(size);
        GcsInputChannel readChannel = gcsService.openReadChannel(fileName, offset);
        try
        {
            readChannel.read(result); // <<<< here the exception was thrown
        }
        finally
        {
            ......
It is now compressed by:
GcsFilename filename = new GcsFilename(bucketName, filePath);
GcsFileOptions.Builder builder = new GcsFileOptions.Builder().mimeType(image_type);
builder = builder.contentEncoding("gzip");
GcsOutputChannel writeChannel = gcsService.createOrReplace(filename, builder.build());
ByteArrayOutputStream byteStream = new ByteArrayOutputStream(blob_content.length);
try
{
    GZIPOutputStream zipStream = new GZIPOutputStream(byteStream);
    try
    {
        zipStream.write(blob_content);
    }
    finally
    {
        zipStream.close();
    }
}
finally
{
    byteStream.close();
}
byte[] compressedData = byteStream.toByteArray();
writeChannel.write(ByteBuffer.wrap(compressedData));
The blob_content is compressed from 46483 bytes to 19823 bytes.
I think this is a bug in Google's code:
https://code.google.com/p/appengine-gcs-client/source/browse/trunk/java/src/main/java/com/google/appengine/tools/cloudstorage/oauth/OauthRawGcsService.java, L418:
Preconditions.checkState(content.length <= want, "%s: got %s > wanted %s", this, content.length, want);
The HTTPResponse has already decoded the blob, so the precondition is wrong here.
If I understand correctly, you have to set the mimeType:
GcsFileOptions options = new GcsFileOptions.Builder().mimeType("text/html").build();
Google Cloud Storage does not compress or decompress objects:
https://developers.google.com/storage/docs/reference-headers?csw=1#contentencoding
I hope that's what you want to do.
Looking at your code, it seems like there is a mismatch between what is stored and what is read. The documentation specifies that compression is not done for you (https://developers.google.com/storage/docs/reference-headers?csw=1#contentencoding). You will need to do the actual compression manually.
Also, if you look at the implementation of the class that throws the exception (https://code.google.com/p/appengine-gcs-client/source/browse/trunk/java/src/main/java/com/google/appengine/tools/cloudstorage/oauth/OauthRawGcsService.java?r=81&spec=svn134), you will notice that you get the original contents back, but you're actually expecting compressed content. Check the method readObjectAsync in the above-mentioned class.
It also looks like the content persisted might not be gzipped, or the content length is not set properly. You should verify the length of the compressed stream just before writing it into the channel, and verify that the content length is set correctly when making the HTTP request. It would be useful to see the actual HTTP request headers and make sure the Content-Length header matches the actual content length in the HTTP response.
The contentEncoding could also be set incorrectly. Try using .contentEncoding("Content-Encoding: gzip") as used in this TCK test. Still, the best thing to do is inspect the HTTP request and response; you can use Wireshark to do that easily.
Also make sure that the GcsOutputChannel is closed, as that's when the file is finalized.
Hope this puts you on the right track. To gzip and ungzip your contents you can use Java's GZIPOutputStream and GZIPInputStream.
I'm seeing the same issue, easily reproducible by uploading a file with "gsutil cp -Z" and then trying to open it with the following:
ByteArrayOutputStream output = new ByteArrayOutputStream();
try (GcsInputChannel readChannel = svc.openReadChannel(filename, 0)) {
    try (InputStream input = Channels.newInputStream(readChannel)) {
        IOUtils.copy(input, output);
    }
}
This causes an exception like this:
java.lang.IllegalStateException:
....oauth.OauthRawGcsService$2#1883798: got 64303 > wanted 4096
at ....Preconditions.checkState(Preconditions.java:199)
at ....oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:519)
at ....oauth.OauthRawGcsService$2.wrap(OauthRawGcsService.java:499)
The only workaround I've found is to read the entire file into memory using readChannel.read:
int fileSize = 64303;
ByteBuffer result = ByteBuffer.allocate(fileSize);
try (GcsInputChannel readChannel = gcs.openReadChannel(new GcsFilename("mybucket", "mygzippedfile.xml"), 0)) {
    readChannel.read(result);
}
Unfortunately, this only works if the size of the ByteBuffer is greater than or equal to the uncompressed size of the file, which is not possible to get via the API.
I've also posted my comment to an issue registered with Google: https://code.google.com/p/googleappengine/issues/detail?id=10445
This is my function for reading compressed gzip files:
public byte[] getUpdate(String fileName) throws IOException
{
    GcsFilename fileNameObj = new GcsFilename(defaultBucketName, fileName);
    try (GcsInputChannel readChannel = gcsService.openReadChannel(fileNameObj, 0))
    {
        maxSizeBuffer.clear();
        readChannel.read(maxSizeBuffer);
    }
    byte[] result = maxSizeBuffer.array();
    return result;
}
The key point is that you cannot rely on the size of the saved file, because Google Storage serves the object at its original (uncompressed) size; the client then checks the size you expected against the real size, and they are different:
Preconditions.checkState(content.length <= want, "%s: got %s > wanted %s", this, content.length, want);
So I solved it by allocating the largest possible buffer for these files using BlobstoreService.MAX_BLOB_FETCH_SIZE. Note that maxSizeBuffer is allocated only once, outside of the function:
ByteBuffer maxSizeBuffer = ByteBuffer.allocate(BlobstoreService.MAX_BLOB_FETCH_SIZE);
With maxSizeBuffer.clear(), the buffer is reset for the next read.

Reproject with GDAL GDALAutoCreateWarpedVRT in C

I want to re-project an HDF from UTM (WGS84) to sinusoidal (WGS84), so I tried to use GDALAutoCreateWarpedVRT to do it. The code is below:
hSrcDS = (GDALDataset*) GDALOpen("HJ1ACCD1.hdf", GA_ReadOnly);

const char *pszSrcWKT = NULL;
char *pszDstWKT = NULL;
// pszSrcWKT = ProjectionStr;
pszSrcWKT = GDALGetProjectionRef(hSrcDS);
CPLAssert(pszSrcWKT != NULL && strlen(pszSrcWKT) > 0);

OGRSpatialReference oSRS;
oSRS.SetSinusoidal(0, 0, 0);
oSRS.SetWellKnownGeogCS("WGS84");
oSRS.exportToWkt(&pszDstWKT);

GDALWarpOptions *psWarpOptions = GDALCreateWarpOptions();
psWarpOptions->dfWarpMemoryLimit = 500 * 1024 * 1024;
hDstDS = (GDALDataset*) GDALAutoCreateWarpedVRT(hSrcDS, pszSrcWKT, pszDstWKT, GRA_Bilinear, 20, psWarpOptions);

GDALDriver *poDriverTiff = GetGDALDriverManager()->GetDriverByName("GTIFF");
poDriverTiff->CreateCopy("d:\\toto.tif", (GDALDataset*) hDstDS, false, NULL, NULL, NULL);
When I set oSRS.SetSinusoidal(0, 0, 0), the result seems good, but the resolution is doubled (from 30 to 60). It's so weird.
According to the API docs for GDALAutoCreateWarpedVRT:
The GDALSuggestedWarpOutput() function is used to determine the bounds and resolution of the output virtual file which should be large enough to include all the input image
There is also a GDALSuggestedWarpOutput2() function to help suggest output file size for a similar set of requirements.
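If you want to control the output resolution rather than accept the suggested one, you can compute the suggested output grid yourself and override the pixel size before building the warped VRT. Below is a minimal sketch of that idea; the transformer and warp-option calls are standard GDAL C API, but the fixed target resolution, the single-band setup, and the variable names are assumptions carried over from the question:

#include "gdalwarper.h"
#include "gdal_alg.h"
#include "cpl_conv.h"

GDALDatasetH CreateWarpedVRTAtFixedRes(GDALDatasetH hSrcDS,
                                       const char *pszSrcWKT,
                                       const char *pszDstWKT,
                                       double dfTargetRes /* e.g. 30.0, assumed */)
{
    double adfDstGeoTransform[6];
    int nPixels = 0, nLines = 0;

    // Transformer from source pixel/line coordinates to the destination SRS.
    void *hTransformArg = GDALCreateGenImgProjTransformer(
        hSrcDS, pszSrcWKT, NULL, pszDstWKT, FALSE, 0.0, 1);

    // Ask GDAL for a suggested geotransform and raster size covering the input.
    GDALSuggestedWarpOutput(hSrcDS, GDALGenImgProjTransform, hTransformArg,
                            adfDstGeoTransform, &nPixels, &nLines);

    // Rescale the raster size so the same extent is kept at the forced resolution.
    nPixels = (int)(nPixels * adfDstGeoTransform[1] / dfTargetRes + 0.5);
    nLines = (int)(nLines * (-adfDstGeoTransform[5]) / dfTargetRes + 0.5);
    adfDstGeoTransform[1] = dfTargetRes;  // pixel width
    adfDstGeoTransform[5] = -dfTargetRes; // pixel height (negative for north-up)

    // Point the transformer at the final output grid.
    GDALSetGenImgProjTransformerDstGeoTransform(hTransformArg, adfDstGeoTransform);

    GDALWarpOptions *psWO = GDALCreateWarpOptions();
    psWO->hSrcDS = hSrcDS;
    psWO->eResampleAlg = GRA_Bilinear;
    psWO->pfnTransformer = GDALGenImgProjTransform;
    psWO->pTransformerArg = hTransformArg;
    psWO->nBandCount = 1; // single band assumed; extend for multi-band input
    psWO->panSrcBands = (int *) CPLMalloc(sizeof(int));
    psWO->panSrcBands[0] = 1;
    psWO->panDstBands = (int *) CPLMalloc(sizeof(int));
    psWO->panDstBands[0] = 1;

    // Build the VRT with the explicit size and geotransform instead of
    // letting GDALAutoCreateWarpedVRT() pick them.
    return GDALCreateWarpedVRT(hSrcDS, nPixels, nLines, adfDstGeoTransform, psWO);
}

The resulting dataset can then be written out with CreateCopy() exactly as in the question.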

Image Analysis Program Based on Hashcode Method Resulting in Errors

I am trying to write a program that will recognize an image on the screen, compare it against a resource library, and then calculate based on the result of the image source.
The first thing that I did was to create the capture screen function which looks like this:
private Bitmap Screenshot()
{
    // Capture an 88x40 region of the screen starting at (1047, 44).
    System.Drawing.Bitmap Table = new System.Drawing.Bitmap(88, 40, PixelFormat.Format32bppArgb);
    System.Drawing.Graphics g = System.Drawing.Graphics.FromImage(Table);
    g.CopyFromScreen(1047, 44, 0, 0, Screen.PrimaryScreen.Bounds.Size);
    return Table;
}
Then I analyze this picture. The first method I used was to create two for loops and compare the two bitmaps pixel by pixel. The problem with this method was time; it took a long time to run the comparison 37 times. I looked around and found the convert-to-bytes and compute-hash approach. This is the result:
public enum CompareResult
{
    ciCompareOk,
    ciPixelMismatch,
    ciSizeMismatch
};

public CompareResult Compare(Bitmap bmp1, Bitmap bmp2)
{
    CompareResult cr = CompareResult.ciCompareOk;
    // Test to see if we have the same size of image
    if (bmp1.Size != bmp2.Size)
    {
        cr = CompareResult.ciSizeMismatch;
    }
    else
    {
        // Convert each image to a byte array
        System.Drawing.ImageConverter ic = new System.Drawing.ImageConverter();
        byte[] btImage1 = new byte[1];
        btImage1 = (byte[])ic.ConvertTo(bmp1, btImage1.GetType());
        byte[] btImage2 = new byte[1];
        btImage2 = (byte[])ic.ConvertTo(bmp2, btImage2.GetType());
        // Compute a hash for each image
        SHA256Managed shaM = new SHA256Managed();
        byte[] hash1 = shaM.ComputeHash(btImage1);
        byte[] hash2 = shaM.ComputeHash(btImage2);
        for (int i = 0; i < hash1.Length && i < hash2.Length && cr == CompareResult.ciCompareOk; i++)
        {
            if (hash1[i] != hash2[i])
                cr = CompareResult.ciPixelMismatch;
        }
    }
    return cr;
}
After I analyze the two bitmaps in this function, I call it in my main form with the following:
Bitmap Table = Screenshot();
CompareResult success0 = Compare(Properties.Resources.Result0, Table);
if (success0 == CompareResult.ciCompareOk)
{
    double result = 0;
    Num.Text = result.ToString();
    goto end;
}
The problem I am getting is that once this has all been accomplished, I always get a cr value of ciPixelMismatch. I cannot get the images to match, even though they are identical.
To give you a bit more background on the two bitmaps, they are approximately 88 by 40 pixels, and located at 1047, 44 on the screen. I wrote a part of the program to automatically take a picture of that area so I did not have to worry about the wrong location or size being captured:
Table.Save("table.bmp");
After I took the picture and saved it, I moved it from the project's bin folder directly into the resource folder and ran the program again. Despite all of this, the result is still ciPixelMismatch. I believe the problem lies in the format the pictures are being saved in: despite being the same image, they are analyzed in different formats, and maybe one of the pictures contains a bit more information than the other, which causes the mismatch. Can somebody please help me solve this problem? I am just beginning with C# programming, I am 5 days into the learning process, and I am really at a loss here.
Yours sincerely,
Samuel

How can I adjust contrast in OpenCV in C?

I'm just trying to adjust the contrast/brightness of a grayscale image to highlight the whites in it, using OpenCV in C. How can I do that? Is there a function in OpenCV for this task?
Original image:
Modified image:
Thanks in advance!
I think you can adjust the contrast here in the following ways:
1) Histogram equalization:
But when I tried this with your image, the result was not as you expected. Check it below:
2) Thresholding:
Here, I compared each pixel value of the input with an arbitrary value (I took 127). Below is the logic, for which OpenCV has an inbuilt function. But remember, the output is a binary image, not grayscale as you had.
if (input pixel value >= 127):
    output pixel value = 255
else:
    output pixel value = 0
And below is the result I got:
For this, you can use the threshold or compare functions (see the C++ sketch after this list).
3) If you must have a grayscale image as output, do as follows (the code is in OpenCV-Python, but for every function, corresponding C functions are available at opencv.itseez.com):
for each pixel in image:
    if pixel value >= 127: add 'x' to pixel value
    else: subtract 'x' from pixel value
('x' is an arbitrary value.) This increases the difference between light and dark pixels.
import cv2
import numpy as np

img = cv2.imread('brain.jpg', 0)
bigmask = cv2.compare(img, np.uint8([127]), cv2.CMP_GE)
smallmask = cv2.bitwise_not(bigmask)
x = np.uint8([90])
big = cv2.add(img, x, mask=bigmask)
small = cv2.subtract(img, x, mask=smallmask)
res = cv2.add(big, small)
And below is the result obtained:
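Since the question asks for C, it's worth noting that steps 1 and 2 above are each a single call in OpenCV's native API. A minimal C++ sketch (cvEqualizeHist and cvThreshold are the C-API equivalents; the filename and the 127 cutoff are carried over from the example, everything else is assumed):

#include <opencv2/opencv.hpp>

int main()
{
    // Load the input as grayscale (filename is an assumption).
    cv::Mat img = cv::imread("brain.jpg", cv::IMREAD_GRAYSCALE);

    // 1) Histogram equalization: spreads intensities over the full 0-255 range.
    cv::Mat equalized;
    cv::equalizeHist(img, equalized);

    // 2) Thresholding: pixels above the 127 cutoff become 255, the rest 0.
    cv::Mat binary;
    cv::threshold(img, binary, 127, 255, cv::THRESH_BINARY);

    cv::imwrite("brain_equalized.jpg", equalized);
    cv::imwrite("brain_binary.jpg", binary);
    return 0;
}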
You could also check out the OpenCV CLAHE algorithm. Instead of equalizing the histogram globally, it splits up the image into tiles and equalizes those locally, then stitches them together. This can give a much better result.
With your image in OpenCV 3.0.0:
import cv2
inp = cv2.imread('inp.jpg',0)
clahe = cv2.createCLAHE(clipLimit=4.0, tileGridSize=(8,8))
res = clahe.apply(inp)
cv2.imwrite('res.jpg', res)
Gives something pretty nice
Read more about it here, though it's not super helpful:
http://docs.opencv.org/3.1.0/d5/daf/tutorial_py_histogram_equalization.html#gsc.tab=0
Although this post is a bit aged: what about using cvAddWeighted()?
What it does is:
dst = src1*alpha + src2*beta + gamma
What I understand from applying brightness and contrast is that one wants to do:
dst = src*contrast + brightness;
so if
src1 = input image
src2 = any image of the same type as src1
alpha = contrast value
beta = 0.0
gamma = brightness value
dst = resulting image (must be of the same type as src1)
one should be pretty much done with the task, no?
This approach works for me using CvMat* images.
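A minimal sketch of that idea with the legacy C API follows. The filename and the contrast/brightness values are arbitrary assumptions; passing the source image twice with beta = 0.0 makes the second term drop out:

#include <opencv/cv.h>
#include <opencv/highgui.h>

int main(void)
{
    // Load the input as grayscale (filename is an assumption).
    IplImage *src = cvLoadImage("brain.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    IplImage *dst = cvCloneImage(src);

    double contrast = 1.5;    // alpha (example value)
    double brightness = 20.0; // gamma (example value)

    // dst = src*alpha + src*beta + gamma; with beta = 0.0 this reduces to
    // dst = src*contrast + brightness (saturated to the 8-bit range).
    cvAddWeighted(src, contrast, src, 0.0, brightness, dst);

    cvSaveImage("brain_contrast.jpg", dst, 0);
    cvReleaseImage(&src);
    cvReleaseImage(&dst);
    return 0;
}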

Qt save Picture within non-picture File

How would I save a picture inside an arbitrary file (without overwriting the existing data in it) and then be able to access it later? (For example, if it was the picture at index 0, you would access it by specifying 0 later, or something like that.)
Use this method:
bool QPixmap::save(const QString &fileName, const char *format = 0, int quality = -1) const
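QPixmap::save() on its own writes a whole new file, so to store pictures inside an existing file without clobbering its contents you need your own record framing on top of it. Here is a minimal sketch under that assumption: each picture is appended as a length-prefixed PNG record (appendPixmap and readPixmap are hypothetical helpers, not Qt API), and you read picture N back by skipping the N records before it:

#include <QFile>
#include <QBuffer>
#include <QDataStream>
#include <QPixmap>

// Append a pixmap to the end of an existing file as a length-prefixed
// PNG record, leaving the file's existing contents untouched.
bool appendPixmap(const QString &path, const QPixmap &pix)
{
    QByteArray png;
    QBuffer buffer(&png);
    buffer.open(QIODevice::WriteOnly);
    pix.save(&buffer, "PNG");  // QPixmap::save also accepts a QIODevice

    QFile file(path);
    if (!file.open(QIODevice::Append))
        return false;
    QDataStream out(&file);
    out << png;                // QDataStream writes the length, then the bytes
    return true;
}

// Read back the index-th appended pixmap, given the offset where the
// first record starts (the host file's size before any appends).
QPixmap readPixmap(const QString &path, qint64 firstRecordOffset, int index)
{
    QFile file(path);
    if (!file.open(QIODevice::ReadOnly))
        return QPixmap();
    file.seek(firstRecordOffset);

    QDataStream in(&file);
    QByteArray png;
    for (int i = 0; i <= index; ++i)
        in >> png;             // skip earlier records, keep the last one read

    QPixmap pix;
    pix.loadFromData(png, "PNG");
    return pix;
}

Note that you need to remember where the first record starts (e.g., the host file's original size before the first append), since the original file format knows nothing about the appended data.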
