Reproject with GDAL's GDALAutoCreateWarpedVRT in C

I want to reproject an HDF file from UTM (WGS84) to sinusoidal (WGS84), so I am trying to use GDALAutoCreateWarpedVRT to do it. The code is below:
#include "gdal_priv.h"
#include "gdalwarper.h"
#include "ogr_spatialref.h"

GDALAllRegister();

GDALDataset *hSrcDS = (GDALDataset *)GDALOpen("HJ1ACCD1.hdf", GA_ReadOnly);
const char *pszSrcWKT = NULL;
char *pszDstWKT = NULL;
//pszSrcWKT = ProjectionStr;
pszSrcWKT = GDALGetProjectionRef(hSrcDS);
CPLAssert(pszSrcWKT != NULL && strlen(pszSrcWKT) > 0);

OGRSpatialReference oSRS;
oSRS.SetSinusoidal(0, 0, 0);
oSRS.SetWellKnownGeogCS("WGS84");
oSRS.exportToWkt(&pszDstWKT);

GDALWarpOptions *psWarpOptions = GDALCreateWarpOptions();
psWarpOptions->dfWarpMemoryLimit = 500 * 1024 * 1024;

GDALDataset *hDstDS = (GDALDataset *)GDALAutoCreateWarpedVRT(hSrcDS, pszSrcWKT, pszDstWKT,
                                                             GRA_Bilinear, 20, psWarpOptions);

GDALDriver *poDriverTiff = GetGDALDriverManager()->GetDriverByName("GTiff");
poDriverTiff->CreateCopy("d:\\toto.tif", hDstDS, false, NULL, NULL, NULL);
When I set oSRS.SetSinusoidal(0, 0, 0), the result looks correct, but the resolution is doubled (from 30 m to 60 m). Why does that happen?

According to the API docs for GDALAutoCreateWarpedVRT:
The GDALSuggestedWarpOutput() function is used to determine the bounds and resolution of the output virtual file which should be large enough to include all the input image
So the 60 m pixel size you see is simply whatever GDALSuggestedWarpOutput() decided was a reasonable output resolution for the new projection; it is not required to match the 30 m source resolution. There is also a GDALSuggestedWarpOutput2() function that suggests an output size for a similar set of requirements. If you want to keep the 30 m resolution, you have to compute the output geotransform yourself and build the VRT with GDALCreateWarpedVRT() instead of GDALAutoCreateWarpedVRT(), as in the sketch below.
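A minimal sketch of that manual route (untested; hSrcDS, pszSrcWKT and pszDstWKT are the same ones built in the question, 30.0 is just the source pixel size to preserve, and the band/transformer bookkeeping is the usual warp-API boilerplate):

#include <math.h>
#include "gdalwarper.h"
#include "gdal_alg.h"

// Transformer that maps source pixel/line to destination georeferenced coordinates.
void *hTransformArg = GDALCreateGenImgProjTransformer(hSrcDS, pszSrcWKT, NULL, pszDstWKT, FALSE, 0.0, 1);

double adfDstGeoTransform[6];
int nPixels = 0, nLines = 0;
GDALSuggestedWarpOutput(hSrcDS, GDALGenImgProjTransform, hTransformArg,
                        adfDstGeoTransform, &nPixels, &nLines);

// Force the pixel size back to 30 m and grow the raster so the extent stays the same.
double dfRes = 30.0;
nPixels = (int)ceil(nPixels * fabs(adfDstGeoTransform[1]) / dfRes);
nLines = (int)ceil(nLines * fabs(adfDstGeoTransform[5]) / dfRes);
adfDstGeoTransform[1] = dfRes;
adfDstGeoTransform[5] = -dfRes;

// Tell the transformer about the output geotransform we just chose.
GDALSetGenImgProjTransformerDstGeoTransform(hTransformArg, adfDstGeoTransform);

GDALWarpOptions *psWO = GDALCreateWarpOptions();
psWO->hSrcDS = hSrcDS;
psWO->eResampleAlg = GRA_Bilinear;
psWO->pfnTransformer = GDALGenImgProjTransform;
psWO->pTransformerArg = hTransformArg;
psWO->nBandCount = GDALGetRasterCount(hSrcDS);
psWO->panSrcBands = (int *)CPLMalloc(sizeof(int) * psWO->nBandCount);
psWO->panDstBands = (int *)CPLMalloc(sizeof(int) * psWO->nBandCount);
for (int i = 0; i < psWO->nBandCount; i++)
{
    psWO->panSrcBands[i] = i + 1;
    psWO->panDstBands[i] = i + 1;
}

GDALDatasetH hWarpedVRT = GDALCreateWarpedVRT(hSrcDS, nPixels, nLines, adfDstGeoTransform, psWO);
GDALSetProjection(hWarpedVRT, pszDstWKT);
// (GDALDataset *)hWarpedVRT can now be handed to the GTiff driver's CreateCopy() exactly as before.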

Find a specific number of occurrences of a word from the beginning of a string

I've been gathering information from my Jira using API calls. The information gathered is saved in a body file with the following content:
No tickets:
{"startAt":0,"maxResults":50,"total":0,"issues":[]}{"startAt":0,"maxResults":50,"total":0,"issues":[]}
One Ticket:
{"expand":"names,schema","startAt":0,"maxResults":50,"total":1,"issues":[{"expand":"operations,versionedRepresentations,editmeta,changelog,renderedFields","id":"456881","self":"https://myjira...com","key":"TICKET-1111","fields":{"summary":"[TICKET] New Test jira","created":"2018-12-17T01:47:09.000-0800"}}]}{"expand":"names,schema","startAt":0,"maxResults":50,"total":1,"issues":[{"expand":"operations,versionedRepresentations,editmeta,changelog,renderedFields","id":"456881","self":"https://myjira...com","key":"TICKET-1111","fields":{"summary":"[TICKET] New Test jira","created":"2018-12-17T01:47:09.000-0800"}}]}
Two Tickets:
{expand:schema,names,startAt:0,maxResults:50,total:2,issues:[{expand:operations,versionedRepresentations,editmeta,changelog,renderedFields,id:456881,self:https://myjira...com,key:TICKET-1111,fields:{summary:[TICKET] New Test jira,created:2018-12-17T01:47:09.000-0800}},{expand:operations,versionedRepresentations,editmeta,changelog,renderedFields,id:320281,self:https://myjira...com,key:TICKET-2222,fields:{summary:[TICKET] Test jira,created:2016-03-18T07:58:52.000-0700}}]}{expand:schema,names,startAt:0,maxResults:50,total:2,issues:[{expand:operations,versionedRepresentations,editmeta,changelog,renderedFields,id:456881,self:https://myjira...com,key:TICKET-1111,fields:{summary:[TICKET] New Test jira,created:2018-12-17T01:47:09.000-0800}},{expand:operations,versionedRepresentations,editmeta,changelog,renderedFields,id:320281,self:https://myjira...com,key:TICKET-2222,fields:{summary:[TICKET] Test jira,created:2016-03-18T07:58:52.000-0700}}]}
etc..
Using this code, I've been able to get the total number of open tickets:
#include <algorithm>
#include <fstream>
#include <iterator>
#include <string>

int GetOpenIssuesNumber()
{
    std::ifstream t("BodyOpenIssues.out");
    std::string BodyString((std::istreambuf_iterator<char>(t)),
                           std::istreambuf_iterator<char>());
    // Removing quotes
    BodyString.erase(std::remove(BodyString.begin(), BodyString.end(), '"'), BodyString.end());

    std::size_t first = BodyString.find("total:");
    std::size_t last = BodyString.find(",issues");
    std::string TotalOpenIssues = BodyString.substr(first + 6, last - (first + 6));
    return std::stoi(TotalOpenIssues);
}
In a second function, I'm trying to get the keys based on the total number of open tickets.
if (GetOpenIssuesNumber() > 0)
{
    std::ifstream t("BodyOpenIssues.out");
    std::string BodyString((std::istreambuf_iterator<char>(t)),
                           std::istreambuf_iterator<char>());
    // Removing quotes
    BodyString.erase(std::remove(BodyString.begin(), BodyString.end(), '"'), BodyString.end());

    std::size_t first = BodyString.find("key:TICKET-");
    std::size_t last = BodyString.find(",fields");
    std::string TotalOpenIssues = BodyString.substr(first + 11, last - (first + 11));
    String^ Result = gcnew String(TotalOpenIssues.c_str());
    return "TICKET-" + Result;
}
else
{
    return "No open issues found";
}
What I mean is:
If the total is 1, search from the beginning and find the first key, TICKET-1111.
If the total is 2, search from the beginning, get the first key TICKET-1111, then continue from there and find the next key, TICKET-2222.
In general, based on that total, find that many keys in the string.
I got lost in all the casting between types: ifstream reads the file and I save the result in a std::string; after the find I save the result in a System::String so I can use it in my label. I've been researching and found that I could use a char array, but I can't make it dynamic based on BodyString.length().
If more information is required, please let me know.
Any suggestions are really appreciated! Thank you in advance!
I went with the nlohmann json library. It has everything I need. Thank you, walnut!
These are formatted as JSON. You should use a JSON library for C++ and parse the files with that. Using search/replace is unnecessarily complicated, and you will likely run into corner cases you haven't considered sooner or later (do you really want the code to randomly miss tickets?). Also, String^ is not C++. Are you writing C++/CLI instead of C++? If so, please tag c++-cli instead of c++. – walnut
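For reference, a minimal sketch of what the nlohmann/json version can look like (the function name is mine; it assumes one response object is parsed at a time, since the file shown above actually contains the response twice, back to back):

#include <fstream>
#include <string>
#include <vector>
#include <nlohmann/json.hpp>

using json = nlohmann::json;

// Returns the ticket keys from the first response object in the file.
std::vector<std::string> GetOpenIssueKeys()
{
    std::ifstream f("BodyOpenIssues.out");
    json j;
    f >> j; // operator>> parses one JSON value from the stream and stops there

    std::vector<std::string> keys;
    for (const auto &issue : j["issues"])
        keys.push_back(issue["key"].get<std::string>());
    return keys; // empty when "total" is 0
}

Each key can then be converted for the label the same way as in the question, e.g. gcnew String(key.c_str()).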

NAudio: trimming an MP3 file

I'm trying to trim an MP3 file using this code:
private void TrimMp3(string open, string save)
{
    using (var mp3FileReader = new Mp3FileReader(open))
    using (var writer = File.Create(save))
    {
        var startPosition = TimeSpan.FromSeconds(60);
        var endPosition = TimeSpan.FromSeconds(90);
        mp3FileReader.CurrentTime = startPosition;
        while (mp3FileReader.CurrentTime < endPosition)
        {
            var frame = mp3FileReader.ReadNextFrame();
            if (frame == null) break;
            writer.Write(frame.RawData, 0, frame.RawData.Length);
        }
    }
}
"open" is the file I'm trimming and "save" is the location I'm saving.
The trimming works but not fully. The new file does start from 60 seconds but it keeps going and not stopping at 90 seconds. For example if the file is 3 minutes it will start at 1 minute and end at 3. Its like the while is always true. What am I doing wrong here?
Thanks in advance!
I have no idea what your Mp3FileReader is doing there, but your while loop looks odd. Does mp3FileReader.ReadNextFrame() also change mp3FileReader.CurrentTime? If not, there is your problem.
You should at least advance mp3FileReader.CurrentTime by one frame per iteration; otherwise CurrentTime never changes and the loop condition is always true.
In NAudio 1.8.0, Mp3FileReader.ReadNextFrame does not progress CurrentTime, although I checked in a fix for that recently.
So you can either get the latest NAudio code, or make use of the SampleCount property on each Mp3Frame to accurately keep track of how far through you are yourself.
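For example, the SampleCount route could look roughly like this (an untested sketch against NAudio 1.8.0, where ReadNextFrame does not advance CurrentTime; the variable names are mine):

using System;
using System.IO;
using NAudio.Wave;

private void TrimMp3(string open, string save)
{
    using (var reader = new Mp3FileReader(open))
    using (var writer = File.Create(save))
    {
        var start = TimeSpan.FromSeconds(60);
        var end = TimeSpan.FromSeconds(90);
        reader.CurrentTime = start;

        var elapsed = start;
        Mp3Frame frame;
        while (elapsed < end && (frame = reader.ReadNextFrame()) != null)
        {
            writer.Write(frame.RawData, 0, frame.RawData.Length);
            // Track progress ourselves: each frame holds SampleCount samples at SampleRate Hz.
            elapsed += TimeSpan.FromSeconds((double)frame.SampleCount / frame.SampleRate);
        }
    }
}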

Image analysis program based on hash comparison always reporting a mismatch

I am trying to write a program that will recognize an image on the screen, compare it against a resource library, and then perform a calculation based on which resource image it matched.
The first thing that I did was to create the capture screen function which looks like this:
private Bitmap Screenshot()
{
    System.Drawing.Bitmap Table = new System.Drawing.Bitmap(88, 40, PixelFormat.Format32bppArgb);
    System.Drawing.Graphics g = System.Drawing.Graphics.FromImage(Table);
    g.CopyFromScreen(1047, 44, 0, 0, Screen.PrimaryScreen.Bounds.Size);
    return Table;
}
Then I analyze this picture. The first method I used was two for loops comparing the bitmaps pixel by pixel. The problem with that method was speed: it took a long time to run the comparison 37 times. I looked around and found the convert-to-bytes and convert-to-hash approach. This is the result:
public enum CompareResult
{
    ciCompareOk,
    ciPixelMismatch,
    ciSizeMismatch
};

public CompareResult Compare(Bitmap bmp1, Bitmap bmp2)
{
    CompareResult cr = CompareResult.ciCompareOk;
    // Test to see if we have the same size of image
    if (bmp1.Size != bmp2.Size)
    {
        cr = CompareResult.ciSizeMismatch;
    }
    else
    {
        // Convert each image to a byte array
        System.Drawing.ImageConverter ic = new System.Drawing.ImageConverter();
        byte[] btImage1 = new byte[1];
        btImage1 = (byte[])ic.ConvertTo(bmp1, btImage1.GetType());
        byte[] btImage2 = new byte[1];
        btImage2 = (byte[])ic.ConvertTo(bmp2, btImage2.GetType());
        // Compute a hash for each image
        SHA256Managed shaM = new SHA256Managed();
        byte[] hash1 = shaM.ComputeHash(btImage1);
        byte[] hash2 = shaM.ComputeHash(btImage2);
        for (int i = 0; i < hash1.Length && i < hash2.Length && cr == CompareResult.ciCompareOk; i++)
        {
            if (hash1[i] != hash2[i])
                cr = CompareResult.ciPixelMismatch;
        }
    }
    return cr;
}
After I analyze the two bitmaps in this function, I call it in my main form with the following:
Bitmap Table = Screenshot();
CompareResult success0 = Compare(Properties.Resources.Result0, Table);
if (success0 == CompareResult.ciCompareOk)
{
    double result = 0;
    Num.Text = result.ToString();
    goto end;
}
The problem is that once this has all run, I always get a cr value of ciPixelMismatch. I cannot get the images to match, even though they are identical.
To give you a bit more background on the two bitmaps: they are approximately 88 by 40 pixels and located at 1047, 44 on the screen. I wrote a part of the program to automatically take a picture of that area so I did not have to worry about capturing the wrong location or size:
Table.Save("table.bmp");
After I took the picture and saved it, I moved it from the project's bin folder directly into the resource folder and ran the program again. Despite all of this, the result is still ciPixelMismatch. I believe the problem lies in the format the pictures are saved in: even though they are the same image, they are probably being encoded differently, and one of them may contain a bit more information than the other, which causes the mismatch. Can somebody please help me solve this problem? I am just beginning with C# programming, five days into the learning process, and I am really at a loss.
Yours sincerely,
Samuel

Put live audio input data into an array

Hey, I am using getUserMedia() to capture audio input from the user's microphone. I want to put the captured values into an array so I can manipulate them. I am using the following code, but the problem is that my array gets filled with the value 128 all the time (I print the results to the console for now), and I can't find my mistake. Can someone help me find it?
// create a new context for audio input
context = new webkitAudioContext();
var analyser = null;
var dataarray = [];

getLiveInput = function() {
    navigator.webkitGetUserMedia({audio: true}, onStream, onStreamError);
};

function onStream(stream)
{
    var input = context.createMediaStreamSource(stream);
    analyser = context.createAnalyser();
    var str = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteTimeDomainData(str);
    for (var i = 0; i < str.length; i++) {
        var value = str[i];
        dataarray.push(value);
        console.log(dataarray);
    } // end for loop
} // end function

function onStreamError(e) {
    console.error('Streaming failed: ', e);
}
The values returned from getByteTimeDomainData are 8-bit integers, from 0 to 255. 128, which is halfway, basically means "no signal". It is the equivalent of 0 in PCM audio data ranging from -1 to 1.
But anyway, there are a couple of problems:
First, you're never connecting the input to the analyser. You need input.connect(analyser) before you call analyser.getByteTimeDomainData().
The second problem isn't with your code so much as it's an implementation issue.
Basically, the onStream function only gets called once, and getByteTimeDomainData only returns data for 1024 samples' worth of audio (a tiny fraction of a second). This all happens so quickly, and so soon after the stream is created, that there is no real input yet. Try wrapping the analyser.getByteTimeDomainData() call and the loop that follows it in a 1000 ms setTimeout, and then whistle into your microphone as soon as you give the browser permission to record. You should see some values other than 128.
Here's an example: http://jsbin.com/avasav/5/edit
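Applied to the code from the question, those two fixes look roughly like this (a sketch; the 1000 ms delay is just the ad-hoc wait described above, and a real app would keep sampling, e.g. on a timer):

function onStream(stream)
{
    var input = context.createMediaStreamSource(stream);
    analyser = context.createAnalyser();
    input.connect(analyser); // fix 1: actually feed the analyser

    setTimeout(function () { // fix 2: wait until the stream carries some real audio
        var data = new Uint8Array(analyser.frequencyBinCount);
        analyser.getByteTimeDomainData(data);
        for (var i = 0; i < data.length; i++) {
            dataarray.push(data[i]);
        }
        console.log(dataarray);
    }, 1000);
}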

Qt: save a picture inside a non-picture file

How would I save a picture inside an arbitrary file (without overwriting the existing data in it) and then be able to access it later? For example, if it was the picture at index 0, you would access it later by specifying 0, or hopefully something like that.
Use this method:
bool QPixmap::save(const QString &fileName, const char *format = 0, int quality = -1) const
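There is also an overload that writes to any QIODevice, bool QPixmap::save(QIODevice *device, const char *format = 0, int quality = -1) const, which is the one to combine with your own container format. Below is a sketch of one possible approach (not from the answer above; the length-prefixing and the "skip N arrays" index convention are my own assumptions): save each picture into a QByteArray as PNG, append that array to the file with QDataStream, and skip earlier arrays when you want the picture at a given index.

#include <QBuffer>
#include <QByteArray>
#include <QDataStream>
#include <QFile>
#include <QPixmap>
#include <QString>

// Append one picture to the end of an existing file without touching its other contents.
void appendPixmap(const QString &fileName, const QPixmap &pixmap)
{
    QByteArray bytes;
    QBuffer buffer(&bytes);
    buffer.open(QIODevice::WriteOnly);
    pixmap.save(&buffer, "PNG"); // QPixmap::save, QIODevice overload

    QFile file(fileName);
    file.open(QIODevice::WriteOnly | QIODevice::Append);
    QDataStream out(&file);
    out << bytes; // QDataStream length-prefixes the QByteArray for us
}

// Read back the picture at position 'index', assuming the appended pictures start at
// byte offset 'picturesStart' (for example, the file's size before the first append).
QPixmap readPixmap(const QString &fileName, qint64 picturesStart, int index)
{
    QFile file(fileName);
    file.open(QIODevice::ReadOnly);
    file.seek(picturesStart);
    QDataStream in(&file);

    QByteArray bytes;
    for (int i = 0; i <= index; ++i)
        in >> bytes; // skip the earlier pictures, keep the one we want

    QPixmap pixmap;
    pixmap.loadFromData(bytes, "PNG");
    return pixmap;
}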
