Alpine APK Package Repositories, how are the checksums calculated?

I'm trying to work out how the pull checksum for packages is calculated within Alpine APK package repositories. The documentation regarding the format is lacking in detail.
I run apk index -o APKINDEX.unsigned.tar.gz *.apk, which generates the repository index. When you extract the text file from inside the gzip archive, it contains the following...
C:Q17KXT6xFVWz4EZDIbkcvXQ/uz9ys=
P:redis-server
V:3.2.3-0
A:noarch
S:2784844
I:102400
T:An advanced key-value store
U:http://redis.io/
L:
D:linux-headers
I'm interested in how the very first line is generated. I've tried to read the actual source that's used to generate this, but I'm not a C programmer, so it's hard for me to comprehend as it jumps all over the place.
The two files mentioned in the documentation are database.c and package.c.
In case it helps, the original APK file has these various hashes...
CRC32 = ac17ea88
MD5 = a035ecf940a67a6572ff40afad4f396a
SHA1 = eca5d3eb11555b3e0464321b91cbd743fbb3f72b
SHA256 = 24bc1f03409b0856d84758d6d44b2f04737bbc260815c525581258a5b4bf6df4

The pull checksum is the sha1sum of the second tar.gz file in the apk file, containing the .PKGINFO file.
The Alpine APK package is actually a disguised concatenation of three tar.gz files.
We can split the package into three .gz files using gunzip-split, then rename them to .tar.gz:
./gunzip-split -s -o ./out/ strace-5.14-r0.apk
mv ./out/file_1.gz ./out/file_1.tar.gz
mv ./out/file_2.gz ./out/file_2.tar.gz
mv ./out/file_3.gz ./out/file_3.tar.gz
sha1sum ./out/file_2.tar.gz
7a266425df7bfd7ce9a42c71a015ea2ae5715838 out/file_2.tar.gz
tar tvf out/file_2.tar.gz
-rw-r--r-- root/root 702 2021-09-03 01:34 .PKGINFO
In the case of the strace package the checksum value can be derived as above:
apk index strace-5.14-r0.apk -o APKINDEX.tar.gz
tar xvf APKINDEX.tar.gz
cat APKINDEX
echo eiZkJd97/XzppCxxoBXqKuVxWDg=|base64 -d|xxd
00000000: 7a26 6425 df7b fd7c e9a4 2c71 a015 ea2a z&d%.{.|..,q...*
00000010: e571 5838 .qX8
When comparing them we see that they match.
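The manual split above can also be done programmatically. Below is a minimal Python sketch (my own, not part of apk-tools) that walks the concatenated gzip members with zlib and hashes the second one, which is the control segment holding .PKGINFO:

import hashlib
import zlib

def apk_pull_checksum(path):
    # Split the .apk into its concatenated gzip members, then SHA-1 the
    # second member (the control segment containing .PKGINFO).
    rest = open(path, "rb").read()
    members = []
    while rest:
        d = zlib.decompressobj(wbits=31)      # wbits=31 selects gzip framing
        d.decompress(rest)                    # consume exactly one gzip member
        end = len(rest) - len(d.unused_data)  # compressed size of that member
        members.append(rest[:end])
        rest = rest[end:]
    return hashlib.sha1(members[1]).hexdigest()

print(apk_pull_checksum("strace-5.14-r0.apk"))  # expect 7a266425df7b...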
References
https://github.com/martencassel/apk-tools/blob/master/README.md
https://gitlab.com/cg909/gunzip-split/-/releases
https://lists.alpinelinux.org/~alpine/devel/%3C257B6969-21FD-4D51-A8EC-95CB95CEF365%40ferrisellis.com%3E#%3C20180309152107.472e4144#vostro.util.wtbts.net%3E

So...
/* Internal container for MD5 or SHA1 */
struct apk_checksum {
    unsigned char data[20];
    unsigned char type;
};
Basically, take the C: value, chop the Q (encoding marker) and the 1 (checksum type, which defaults to SHA1) off the front, then base64-decode the rest: the 20 decoded bytes are your SHA-1 digest. This appears to be computed over the contents of the package, but that would take further looking into.
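A quick Python illustration of that decoding, using the strace checksum shown above with its Q1 prefix restored (my own sketch, just for illustration):

import base64
import binascii

c = "Q1eiZkJd97/XzppCxxoBXqKuVxWDg="  # value of the C: field, without the "C:" tag
assert c[0] == "Q"                    # Q: a base64-encoded checksum follows
assert c[1] == "1"                    # 1: the checksum type is SHA-1
digest = base64.b64decode(c[2:])      # the 20 raw SHA-1 bytes
print(binascii.hexlify(digest).decode())  # 7a266425df7bfd7ce9a42c71a015ea2ae5715838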

You need to look here: https://git.alpinelinux.org/cgit/apk-tools/tree/src/blob.c#n492
It is apk_blob_pull_csum
First 'Q' stands for encoding
Next '1' stands for SHA1
Looks like this checksum is made in database.c, in apk_db_unpack_pkg:
apk_sign_ctx_init(&ctx.sctx, APK_SIGN_VERIFY_IDENTITY, &pkg->csum, db->keys_fd);
tar = apk_bstream_gunzip_mpart(bs, apk_sign_ctx_mpart_cb, &ctx.sctx);
r = apk_tar_parse(tar, apk_db_install_archive_entry, &ctx, TRUE, &db->id_cache);
but I'm not sure, because I failed to trace this code.
It is really not easy to understand what they are doing.

Related

Contiguous Hex file generation using GCC

I have a Hex file for an STM32F427 that was built using GCC (gcc-arm-none-eabi) version 4.6 and had contiguous memory addresses. I wrote a boot loader for loading that hex file and also added a checksum capability to make sure the Hex file is correct before starting the application.
Snippet of Hex file:
:1005C80018460AF02FFE07F5A64202F1D00207F5F9
:1005D8008E4303F1A803104640F6C821C2F2000179
:1005E8001A460BF053F907F5A64303F1D003184652
:1005F8000BF068F907F5A64303F1E80340F6FC1091
:10060800C2F2000019463BF087FF07F5A64303F145
:10061800E80318464FF47A710EF092FC07F5A643EA
:1006280003F1E80318460EF03DFC034607F5A64221
:1006380002F1E0021046194601F0F2FC07F56A5390
As you can see, all the addresses are sequential. Then we changed the compiler to version 4.8 and I got the same type of Hex file.
But now we use compiler version 6.2, and the generated Hex file is not contiguous. It is somewhat like this:
:10016000B9BC0C08B9BC0C08B9BC0C08B9BC0C086B
:10017000B9BC0C08B9BC0C08B9BC0C08B9BC0C085B
:08018000B9BC0C08B9BC0C0865
:1001900081F0004102E000BF83F0004330B54FEA38
:1001A00041044FEA430594EA050F08BF90EA020FA5
As you can see, the record covering 0x0180 to 0x0187 is followed by one starting at 0x0190, meaning the 8 bytes in between (0x0188 to 0x018F) are 0xFF, as they are not flashed.
Our boot loader is kind of dumb: we just pass it the starting address and the number of bytes over which to calculate the checksum.
Is there a way to make the hex file contiguous, as compilers 4.6 and 4.8 did? The code is the same in all three cases.
If post-processing the hex file is an option, you can consider using the IntelHex Python library. It lets you manipulate hex file data (ignoring the 'markup': record type, address, checksum, etc.) rather than lines, and will, for instance, create output with the correct line checksums.
A fast way to get this up and running could be to use the bundled convenience scripts hex2bin.py and bin2hex.py:
python hex2bin.py --pad=FF noncontiguous.hex tmp.bin
python bin2hex.py tmp.bin contiguous.hex
The first line converts the input file noncontiguous.hex to a binary file, padding it with FF where there is no data. The second line converts the binary file back to a hex file.
The result would be
:08018000B9BC0C08B9BC0C0865
becomes
:10018000B9BC0C08B9BC0C08FFFFFFFFFFFFFFFF65
As you can see, padding bytes are added where the input doesn't have any data, equivalent to writing the input file to the device and reading it back out. Bytes that are in the input file are kept the same - and at the same address.
The checksum is also correct: changing the length byte from 0x08 to 0x10 adds 0x08 to the record sum, and the eight extra 0xFF data bytes add 8 × 0xFF = 0x7F8, for a total of 0x800, which is 0 modulo 256, so the record checksum is unchanged. If you padded with something else, IntelHex would output the correct checksum for that, too.
You can skip the creation of a temporary file by piping these: omit tmp.bin in the first line and replace it with - in the second line:
python hex2bin.py --pad=FF noncontiguous.hex | python bin2hex.py - contiguous.hex
An alternative way could be to have a base file with all FF and use the hexmerge.py convenience script to merge gcc's output onto it with --overlap=replace
The longer, more flexible way, would be to implement your own tool using the IntelHex API. I've used this to good effect in situations similar to yours - tweak hex files to satisfy tools that are costly to change, but only handle hex files the way they were when the tool was written.
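As a sketch of that approach (file names are placeholders, and it assumes the intelhex package is installed; treat it as a starting point rather than a finished tool):

from intelhex import IntelHex

ih = IntelHex("noncontiguous.hex")  # parses the records into an address map
ih.padding = 0xFF                   # value returned for addresses with no data
start, end = ih.minaddr(), ih.maxaddr()

# Re-create the image as one contiguous byte range, then write it back out.
flat = IntelHex()
flat.frombytes(ih.tobinarray(start=start, end=end), offset=start)
flat.write_hex_file("contiguous.hex")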
One of many possible ways:
Make your hex file with v6.2, e.g., foo.hex.
Postprocess it with this Perl oneliner:
perl -pe 'if (m/^:(..)(....)00(.*)(..)\r?$/) { my ($n,$a,$d) = (hex($1), $2, $3); $d .= "FF" x (16 - $n); my $s = 0x10; $s += hex($_) for ($a =~ /../g), ($d =~ /../g); $_ = sprintf ":10%s00%s%02X\n", $a, $d, (-$s) & 0xFF; }' foo.hex > foo2.hex
Now foo2.hex will have all 16-byte data records.
Note: all this does is FF-pad data records (type 00) out to 0x10 bytes and recompute each record checksum. It assumes data records of at most 16 bytes and doesn't merge records or check addresses.
Explanation
perl -pe '<some script>' <input file> runs <some script> for each line of <input file>, and prints the result. The script is:
if (m/^:(..)(....)00(.*)(..)\r?$/) {     # match data records (type 00) only
    my ($n, $a, $d) = (hex($1), $2, $3); # byte count, address, data ($4, the old checksum, is dropped)
    $d .= "FF" x (16 - $n);              # pad the data out to 16 bytes with 0xFF
    my $s = 0x10;                        # start the sum with the new byte count
    $s += hex($_) for ($a =~ /../g), ($d =~ /../g);        # add the address and data bytes (type adds 0)
    $_ = sprintf ":10%s00%s%02X\n", $a, $d, (-$s) & 0xFF;  # rebuild the record with a fresh checksum
}
Another solution is to change the linker script to ensure the preceding .isr_vector section ends on a 16-byte boundary (for example with an ALIGN(16) directive at the end of the section), since the map file reveals that the following .text section is 16-byte aligned.
This will ensure there are no unprogrammed flash bytes between the two sections.
You can use bincopy to fill all empty space with 0xff.
$ pip install bincopy
$ bincopy fill foo.hex
Use the -gap-fill option of objcopy, e.g.:
arm-none-eabi-objcopy --gap-fill 0xFF -O ihex firmware.elf firmware.hex

Why would file checksums inconsistently fail?

I created a ~2MiB file.
dd if=/dev/urandom of=file.bin bs=2M count=1
Then I copied that file a large number of times and generated a checksum for each (identical) copy.
for i in `seq 50000`;
do
name="file.${i}.bin"
cp file.bin "${name}"
sha512sum "${name}" > "${name}.sha512"
done
I then verified all of those checksummed files with a validation script to run sha512sum against each file.
for file in `find . -regex ".*\.sha512"`
do
sha512sum --check --quiet "${file}" || (
cat "${file}" && sha512sum "${file%.sha512}"
)
done
I just created these files, and when I validate them moments later, I see intermittent failures and inconsistencies in the data (console text truncated for readability)
will:/mnt/usb $ for file in `find ...
file.5602.bin: FAILED
sha512sum: WARNING: 1 computed checksum did NOT match
91fc201a3812e93ef3d4890 ... file.5602.bin
b176e8e3ea63a223130f3a0 ... ./file.5602.bin
The checksum files are all identical, since the source files are all identical.
The problem seems to be that my computer is, seemingly at random, generating the wrong checksum for some of my files when I go to validate. A different file fails the checksum every time, and files that previously failed will pass.
will:/mnt/usb $ for file in `find ...
sha512sum: WARNING: 1 computed checksum did NOT match
91fc201a3812e93ef3d4890 ... file.3248.bin
442a1d8805ed134c9ab5252 ... ./file.3248.bin
Keep in mind that all of these files are identical.
I see the same behavior with SATA SSDs and HDDs, and USB devices, with md5 and sha512, and with xfs, btrfs, ext4, and vfat. I tried live booting into another OS. I see this same strange behavior regardless. I also see that rsync --checksum thinks the checksums are wrong and re-copies these files even though they have not changed.
What could explain this behavior? Since it's happening on multiple devices with all the scenarios I described, I doubt this is bit rot. My kernel logs show no obvious errors. I would assume this is a hardware issue based on my troubleshooting, but how can this be diagnosed? Is it the CPU, the motherboard, the RAM?
What could explain this behavior? How can this be diagnosed?
From what I've read, a number of issues could explain this behavior. Bad disk(s), bad PSU (power supply), bad RAM, filesystem issues.
I tried the following to determine what was happening. I repeated the experiment with different...
Disks
Types of disks (SDD vs HDD)
External drives (3.5 and 2.5 enclosures)
Flash drives (USB 2 and 3 on various ports)
Filesystems (ext4, vfat (fat32), xfs, btrfs)
Different PSU
Different OS (live boot)
Nothing seemed to resolve this.
Finally, I gave memtest86+ v5.0.1 a try via an Ubuntu live USB.
Voilà. It found bad memory. Through process of elimination I determined that one of my memory sticks was bad, then tested the other overnight to ensure it was in good shape. I re-ran my experiment, and I am seeing consistent checksums on all my files.
What a subtle bug. I only noticed this bad behavior by accident. If I hadn't been messing around with file checksums, I do not think I would have found this bad RAM.
This makes me want to schedule a routine in which I regularly verify and test my RAM. A consequence of this bad memory stick is that some of my test data did end up corrupt, but more often than not, the checksum verifications were just intermittent failures.
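Along those lines, a crude smoke test (just a sketch, no substitute for memtest86+) is to hash the same in-memory buffer repeatedly; identical input must always produce an identical digest, so any mismatch points at flaky hardware:

import hashlib
import os

buf = os.urandom(2 * 1024 * 1024)            # a 2 MiB buffer, as in the experiment
reference = hashlib.sha512(buf).hexdigest()
for i in range(10000):
    if hashlib.sha512(buf).hexdigest() != reference:
        print("mismatch on iteration", i, "- suspect RAM or CPU")
        break
else:
    print("no mismatches observed")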
In one sample data pool, all the checksums start with cb2848ca0e1ff27202a309408ec76..., because all ~50,000 files are identical.
There are, though, two files that are corrupt, and this is not bit rot or file-integrity damage.
What seems most likely is that these files were written corrupt in the first place, because cp ran through bad RAM when I created them. Those files consistently return bad checksums of 58fe24f0e00229e8399dc6668b9... and bd85b51065ce5ec31ad7ebf3..., while the other 49,998 files return the same checksum.
This has been a fun, extremely frustrating experiment in debugging.

External file ressource on embedded system (C language with FAT)

My application/device runs on an ARM Cortex-M3 (STM32), without an OS but with FatFs, and needs to access many resource files (audio, images, etc.).
The code runs from internal flash (ROM, 256 KB).
The resource files are stored on external flash (SD card, 4 GB).
There is not much RAM (32 KB), so malloc-ing a complete file from the package is not an option.
As the user has access to the resources folder for atomic updates, I would like to package all these resource files into a single file (.dat, .rom, .whatever)
so the user doesn't mishandle the data.
Can someone point me to a nice solution to do so?
I don't mind remapping fopen, fread, fseek and fclose in my application, but I would not like to start from scratch (coding the serializer, table of contents, parser, etc.). My system is quite limited (no malloc, no framework, just stdlib and FatFs).
Thanks for any input you can give me.
Note: I'm not looking for a solution where the resources are embedded IN the code (ROM), as obviously they are way too big for that.
It should be possible to use FatFs recursively.
Drive 0 would be your real device, and drive 1 would be a file on drive 0. You can implement the disk_* functions like this:
#define BLOCKSIZE 512

FIL imagefile;

DSTATUS disk_initialize(BYTE drv) {
    FRESULT r;
    if(drv == 0)
        return SD_initialize();
    else if(drv == 1) {
        r = f_open(&imagefile, "0:/RESOURCE.DAT", FA_READ);
        if(r == FR_OK)
            return 0;
    }
    return STA_NOINIT;
}
DRESULT disk_read(BYTE drv, BYTE *buff, DWORD sector, DWORD count) {
    FRESULT r;
    UINT br;
    if(drv == 0)
        return SD_read_blocks(buff, sector, count);
    else if(drv == 1) {
        r = f_lseek(&imagefile, sector*BLOCKSIZE);  /* seek within the image file (f_lseek, not f_seek) */
        if(r != FR_OK)
            return RES_ERROR;
        r = f_read(&imagefile, buff, count*BLOCKSIZE, &br);
        if((r == FR_OK) && (br == count*BLOCKSIZE))
            return RES_OK;
    }
    return RES_ERROR;
}
To create the filesystem image on Linux or other similar systems you'd need mkfs.msdos and the mtools package. See this SO post on how to do it. Might work on Windows with Cygwin, too.
To expand on what Joachim said above:
Popular choices of (sometimes uncompressed) archive formats are cpio, tar, and zip. Any of the three would work just fine.
Here are a few more in-depth comments on using TAR or CPIO.
TAR
I've used tar before for the exact purpose, on an stm32 with FatFS, so can tell you it works. I chose it over cpio or zip because of its familiarity (most developers have seen it), ease of use, and rich command line tools.
GNU Tar gives you fine-grained control over the order in which files are placed in the archive, plus regexes to manipulate file names (--xform) and --exclude paths. You can pretty much guarantee you get exactly the archive you're after with nothing more than GNU Tar and a makefile. I'm not sure the same can be said for cpio or zip.
This means it worked well for my build environment, but your requirements may vary.
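If you'd rather script the packing step than drive GNU Tar from a makefile, the same kind of deterministic bundle can be sketched in a few lines of Python (file names here are hypothetical):

import tarfile

# Pack the resource files into RESOURCE.DAT in a fixed order, uncompressed,
# so the archive layout is reproducible from build to build.
resources = ["audio/boot.wav", "img/logo.bmp", "strings/en.txt"]
with tarfile.open("RESOURCE.DAT", "w", format=tarfile.USTAR_FORMAT) as tar:
    for path in resources:
        tar.add(path)  # members are archived in list order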
CPIO
cpio has a much worse and harder-to-use set of command line tools than tar, in my opinion, which is why I steer clear of it when I can. However, its file format is a little lighter-weight and might be even simpler to parse (not that tar is hard).
The Linux kernel project uses cpio for initramfs images, so that's probably the best / most mature example on the internet that you'll find on using it for this sort of purpose.
If you grab any kernel source tree, the tool usr/gen_init_cpio.c can be used to generate a cpio archive from a listing file whose format is described in that source file.
The extraction code is in init/initramfs.c.
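To give a feel for that listing format, a minimal input file might look like this (paths are hypothetical; the authoritative syntax is documented at the top of usr/gen_init_cpio.c):

dir /img 0755 0 0
dir /audio 0755 0 0
file /img/logo.bmp img/logo.bmp 0644 0 0
file /audio/boot.wav audio/boot.wav 0644 0 0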
ZIP
I've never used the zip format for this sort of purpose. So no real comment there.
Berendi found a very clever solution: use the existing FatFs library to access the image recursively!
The implementation is quite simple, and after extensive testing I'd like to post the code for using FatFs recursively, along with the commands used to generate the single-file FAT image.
First, let's generate a 100 MiB FAT32 image file:
dd if=/dev/zero of=fatfile.fs bs=1024 count=102400
mkfs.vfat -F 32 -r 112 -S 512 -v fatfile.fs
Create some content and push it into the image:
echo "HelloWorld on Virtual FAT" >> helloworld.txt
mcopy -i fatfile.fs helloworld.txt ::/
Change the diskio.c file to add Berendi's code, but also:
DSTATUS disk_status (BYTE pdrv)
{
    DSTATUS status = STA_NOINIT;
    switch (pdrv)
    {
    case FATFS_DRIVE_VIRTUAL:
        printf("disk_status: FATFS_DRIVE_VIRTUAL\r\n");
        /* fall through: the virtual drive is ready whenever the SD card is */
    case FATFS_DRIVE_ATA: /* SD CARD */
        status = FATFS_SD_SDIO_disk_status();
        break;
    }
    return status;
}
Don't forget to add the enum for the drive names, and to raise the number of volumes:
#define _VOLUMES 2
Then mount the virtual FAT and access it:
f_mount(&VirtualFAT, (TCHAR const*)"1:/", 1);
f_open(&file, "1:/test.txt", FA_READ);
Thanks a lot for your help.

reprepro complains about the generated pbuilder debian.tar.gz archive md5

I have configured a private APT repository (using resources on the internet like http://inodes.org/2009/09/14/building-a-private-ppa-on-ubuntu/) and I'm uploading my package, containing the sources of my C++ application, for the first time.
So the reprepro repository is empty.
I use the following command in order to start the build:
sudo reprepro -V -b /srv/reprepro processincoming incoming
Then the build starts, a lot of output is generated, and I can see that pbuilder is compiling the project source code and everything is fine. I can even find Debian packages in the result/ folder, etc.
But the build fails with POST_BUILD_FAILED, because pbuilder seems to have changed the douane-testing_0.8.1-apt1.debian.tar.gz file and the md5 sum is now different, as shown here:
File "pool/main/d/douane-testing/douane-testing_0.8.1-apt1.debian.tar.gz" is already registered with different checksums!
md5 expected: 97257ae2c5790b84ed7bb1b412f1d518, got: df78f88b97cadc10bc0a73bf86442838
sha1 expected: ae93c44593e821696f72bee4d91ce4b6f261e529, got: d6f910ca5707ec92cb71601a4f4c72db0e5f18d9
sha256 expected: c3fac5ed112f89a8ed8d4137b34f173990d8a4b82b6212d1e0ada1cddc869b0e, got: ebdcc9ead44ea0dd99f2dc87decffcc5e3efaee64a8f62f54aec556ac19d579c
size expected: 2334, got: 2344
There have been errors!
I don't understand why it is failing, as when I compare the two packages (having those md5 sums) the content is strictly the same (I used a diff tool: no differences and no new or removed files).
The only thing I can see is that the archive from pbuilder is 10 bytes bigger than the original one I uploaded:
On my development machine, the file with the md5 97257ae2c5790b84ed7bb1b412f1d518 :
-rw-r--r-- 1 zedtux zedtux 2334 Feb 3 23:38 douane-testing_0.8.1-apt1.debian.tar.gz
On my server, the file with the md5 df78f88b97cadc10bc0a73bf86442838 :
-rw-r--r-- 1 root root 2344 Feb 5 00:58 douane-testing_0.8.1-apt1.debian.tar.gz
I have pbuilder version 0.213 on my server.
What could be the reason for this behavior, and how can I fix it?
Edit
I suspect an issue with the GPG key, which looks to be missing; the files then aren't signed, so the md5sum is different.
During the build process I have the following lines:
I: Extracting source
gpgv: Signature made Wed Feb 5 22:04:37 2014 UTC using RSA key ID 9474CF36
gpgv: Can't check signature: public key not found
dpkg-source: warning: failed to verify signature on ./douane-testing_0.8.1-apt1.dsc
Edit 2
I have tried to find the command to create the .debian.tar.gz file manually.
The best I've found is the following:
tar cv debian | gzip --no-name --rsyncable -9 > douane-testing_0.8.1-apt1.debian.tar.gz
I don't get the same result as dpkg-source, but I tried the same command on my server (I should at least get the same size) and it's not matching...
Could it be that Debian and Ubuntu aren't compressing the same way?
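One plausible contributor (an illustration, not a diagnosis of this exact case): gzip output is not canonical, because the header can embed an mtime and a file name, so byte-identical payloads can still yield different archive checksums. A small Python sketch (Python 3.8+ for the mtime parameter):

import gzip
import hashlib
import time

data = b"identical tarball contents"
a = gzip.compress(data, mtime=0)                 # fixed header timestamp
b = gzip.compress(data, mtime=int(time.time()))  # current timestamp
print(hashlib.md5(a).hexdigest())
print(hashlib.md5(b).hexdigest())                # differs, same payload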
Finally, after some evenings of research, I found the solution on launchpad.net!
Found the solution. By default pbuilder calls dpkg-buildpackage like so:
DEBBUILDOPTS="$DEBBUILDOPTS -rfakeroot"
dpkg-buildpackage -us -uc $DEBBUILDOPTS
That causes dpkg-buildpackage to rebuild the diff.gz and .dsc files. Add a -b in there, and it won't. It also means the resulting .changes file will only reference the .deb file, which is what you want, I think.
The easy solution is to add a line to your .pbuilderrc:
DEBBUILDOPTS="-b"
My previous answer is right, but it is not complete.
I then had an issue where reprepro complained about the source tarball (.orig.tar.xz).
But that was normal, as I wasn't building the packages correctly.
I have written a bash script which I execute in a VM for each Ubuntu series.
This script was always doing everything from scratch, and was using dh_make's --createorig argument, and here is the issue.
The correct way is to generate the .orig.tar.xz file once (for example on Ubuntu precise) and then re-use it, and no longer use the --createorig argument of dh_make.
I hope this can help someone :-)

What is causing the scaleX method of Imager class to fail?

This is a cross-post from Perl Monks and Mahalo Answers, where I have not received a satisfactory response yet. Thanks for your time and spirit.
Why do I get this error message from Perl?
Can't call method "scaleY" on an undefined value at C:/strawberry/perl/site/lib/Image/Seek.pm line 137.
I am getting this error when calling the Image::Seek module from my script. My script is basically a rehash of the module's suggested code.
Here's my code:
#!/usr/local/bin/perl
use Imager;
use Image::Seek qw(loaddb add_image query_id savedb);
loaddb("haar.db");
my $img = Imager->new("photo-1.jpg")
or die Imager->errstr;
# my $img = Imager->new();
# $img->open(file => "photo-1.jpg")or die Imager->errstr;
add_image($img, 1);
savedb("haar.db");
Here's the section of the Image::Seek module causing the issue:
sub add_image_imager {
    my ($img, $id) = @_;
    my ($reds, $blues, $greens);
    require Imager;
    my $thumb = $img->scaleX(pixels => 128)->scaleY(pixels => 128);
    for my $y (0..127) {
        my @cols = $thumb->getscanline(y => $y);
        for (@cols) {
            my ($r, $g, $b) = $_->rgba;
            $reds .= chr($r); $blues .= chr($b); $greens .= chr($g);
        }
    }
    addImage($id, $reds, $greens, $blues);
}
Line 137 is:
my $thumb = $img->scaleX(pixels => 128)->scaleY(pixels => 128);
If I remove
->scaleY(pixels => 128)
then line 129:
my @cols = $thumb->getscanline(y => $y);
gives me essentially the same error.
At this point I'm just trying to add one image to the database. There is an image named "photo-216.jpg" in the directory where I'm running the script. If I rename it "photo-1.jpg" or "photo-0.jpg" and change the corresponding "add_image" and "query_id" arguments to 1 or 0 respectively, the result is the same.
I do have a database, 385 KB big, that comes from running makedb.pl (below), but it is filled with null characters. I renamed it "haar.db". This is the database that gives me the error. If I recreate haar.db as an empty file, the script hangs, and after a couple of minutes it gives this different message:
"This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information."
If there is no "haar.db", the script still gives me the error in this post's title and, unlike running makedb.pl, produces no database named "haar.db".
By the way, I also get multiple instances of this post's title error when trying to run this database-filling script, which I was alluding to above: http://www.drk7.jp/pub/imgseek/t/makedb.pl.txt/. I obviously removed the .txt extension before trying it. The makedb.pl script is from this Japanese site: http://www.drk7.jp/MT/archives/001258.html.
If I run makedb.pl on a directory of 2423 scanned collectible postage stamp images, I get 362 instances of the error. The 2423 stamps is the count after removing the "small" thumbnail versions, which I originally thought might be causing the issue.
Could it be that some of the images are less than 128 pixels, and that is the issue? But if this is true, why does the database get filled with null characters? ... Unless they are not really null, even though the editor I'm using, Notepad++, says they are.
Also note my images are of stamps, which are only sometimes perfect squares. Otherwise they are sometimes "landscape", sometimes "portrait". Maybe the issue arises when a "landscape" image is scaled to an X axis of 128 pixels and its Y axis ends up less, or much less, than that. Could this be?
Thanks much
Update: Answer completely re-organized.
Image::Seek is not checking whether scaleX returned an error. In your case, for some images, scaleX is failing.
You seem to know for which images scaleX is failing. So, leave your current code aside and put together a short test script:
#!/usr/bin/perl
use strict;
use warnings;
use Imager;
die "Specify image file name\n" unless #ARGV;
my ($imgfile) = #ARGV;
my $img = Imager->new;
$img->read( file => $imgfile )
or die "Cannot read '$imgfile': ", $img->errstr;
my $x_scaled = $img->scaleX( pixels => 128 )
or die 'scaleX failed: ', $img->errstr;
my $thumb = $x_scaled->scaleY( pixels => 128 )
or die 'scaleY failed: ', $x_scaled->errstr;
__END__
Running this test script, you got the error message:
Cannot read 'photo-1.jpg': format 'jpeg' not supported - formats bmp,
ico, pnm, raw, sgi, tga available for reading
indicating the underlying problem: When you installed Imager via Strawberry
Perl's cpan, the libraries for png, jpg etc were not installed. One
solution is to build those libraries with the gcc compiler provided with
Strawberry Perl.
First, you will need zlib.
C:\Temp\zlib-1.2.3> copy win32\Makefile.gcc Makefile
Set prefix = /strawberry/c/local in the Makefile. Compile. You may have to
manually copy the files zlib.h and zconf.h to
C:\strawberry\c\local\include and zlib1.dll, libz.a and libzdll.a to
C:\strawberry\c\local\lib (I don't know because I do not use Strawberry Perl very often and my Strawberry environment is very neglected.)
Then, get libpng. I used the source archive without config script.
C:\Temp\libpng-1.2.38> copy scripts\makefile.mingw Makefile
C:\Temp\libpng-1.2.38> make prefix=/strawberry/c/local ZLIBLIB=/strawberry/c/local/lib ZLIBINC=/strawberry/c/local/include
This built the PNG library. Again, you may have to manually copy the .dll,
.a and .h files to the appropriate directories. I did because of my less
than perfect Strawberry environment.
Finally, get the JPEG library.
C:\Temp\jpeg-7> copy Makefile.ansi Makefile
Make sure to edit this file and set CC=gcc. Customize jconfig.h according
to the instructions in jconfig.txt. I used jconfig.dj as a basis.
You might also want to set
CFLAGS= -O2
SYSDEPMEM= jmemansi.o
in Makefile, and
#define DEFAULT_MAX_MEM 4*1024*1024
in jconfig.h. After running make, again copy the files as needed (and as explained by install.txt).
Once the libraries are installed, you can
C:\Temp> SET IM_INCPATH=C:\strawberry\c\local\include
C:\Temp> SET IM_LIBPATH=C:\strawberry\c\local\lib
C:\Temp> cpan
cpan> force install Imager
which yields:
gif: includes not found - libraries not found
ungif: includes not found - libraries not found
jpeg: includes found - libraries found
png: includes found - libraries found
tiff: includes not found - libraries not found
freetype2: includes not found - libraries not found
freetype2: not available
T1-fonts: includes not found - libraries not found
TT-fonts: includes not found - libraries not found
w32: includes found - libraries found
If all of this is too much work, it is ... sigh ... I just realized the binaries are available at GnuWin32.
