Is it possible to download the ISO images from a different location? - colima

I am currently working in a very restricted environment, so I can't download the Alpine images from the internet. Actually I could, but the proxy of the enterprise I am working for has problems handling large files. Furthermore, it would be more efficient to download the files from local storage.
The file .lima/colima/lima.yaml contains the following section with the specification of the image locations:
images:
- location: https://github.com/abiosoft/alpine-lima/releases/download/colima-v0.4.2-1/alpine-lima-clm-3.14.6-aarch64.iso
  arch: aarch64
  digest: sha512:8e05be487fb6c3cf45f6378ca667f2f175c4fb07f162458ff9254f5ef4290ea94f176a6f2f9f854ab60f9668865e4d9bf0d9b24f0bb88dca4e30596855cc4013
- location: https://github.com/abiosoft/alpine-lima/releases/download/colima-v0.4.2-1/alpine-lima-clm-3.14.6-x86_64.iso
  arch: x86_64
  digest: sha512:229121f3ff3cb645a602e3f21d687207ad14c48330001330430c84e88fb0311a70b4a94250c2e24e80e8d3522ee573e169fef76337214136d1dde9bbc4ec1354
Every time I edit this section manually, it is overwritten the next time I run colima.
Is it possible to upload the images to a local storage server, e.g. a raw repository provided by Nexus, and to replace the URLs above with the local ones?

The first step is to store the needed images somewhere. This might be an artifact repository like Nexus or Artifactory or, according to the developers of the underlying Lima VM, even a plain filesystem.
To tell colima to use your local copy of the ISO images, you must first run colima once:
> colima start
As written above, if the images can't be downloaded, colima will fail to start. However, it has created a so-called context in the configuration of the Lima VM. The next step is to edit this context by entering the command:
> limactl edit colima
This spawns an editor with the configuration, where you can also change the location of the images.
After exiting the editor and saving the configuration, you can start colima again with colima start and the images will be downloaded from the specified location.
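For illustration, after pointing the entries at a local Nexus raw repository, the images section might look like this (the host nexus.example.com and the repository path are hypothetical placeholders; the digests stay the same as long as the files themselves are unchanged):
images:
- location: https://nexus.example.com/repository/raw/alpine-lima-clm-3.14.6-aarch64.iso
  arch: aarch64
  digest: sha512:8e05be487fb6c3cf45f6378ca667f2f175c4fb07f162458ff9254f5ef4290ea94f176a6f2f9f854ab60f9668865e4d9bf0d9b24f0bb88dca4e30596855cc4013
- location: https://nexus.example.com/repository/raw/alpine-lima-clm-3.14.6-x86_64.iso
  arch: x86_64
  digest: sha512:229121f3ff3cb645a602e3f21d687207ad14c48330001330430c84e88fb0311a70b4a94250c2e24e80e8d3522ee573e169fef76337214136d1dde9bbc4ec1354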
This solution was written for colima 0.4.4 and Lima VM 0.11.2.
It remains to be seen whether the developers behind colima will provide a better way to change the location of the images for such cases.

Related

How to support multi-architecture docker-compose configuration for devcontainer.json?

In our engineering team we have people using older macbook pros as well as the new M1 (ARM) chips. We currently have 2 different docker-compose.yaml files that pull in different docker images for our data services based on which architecture the host computer is using. This is not ideal, but currently works fine. I want to make use of devcontainer.json so that our app layer could also live in docker to make setting up a new machine within our eng org very easy. The problem I'm running into is I'm not sure how to tell devcontainer.json which docker-compose.yaml file to use based on which architecture is being used.
My current thought is to just have the developer set an env var on their host system that derives their arch and then utilize that env var within devcontainer.json to point to the correct docker-compose.yaml file, but I'm wondering if there is a better way to achieve my goal.
I noticed that there is an open issue within the vscode-remote-release repo, but I think that pertains to the app-code image that gets built. It's not quite the same situation I'm in, but its solution is probably one part of the answer to my question.

Access Chromium OS source code in archive file

I use Chrome OS for a lot of my programming (oddly enough, but I do), and I wanted to access the Chromium OS source code because I like Chrome OS and want to see the insides of how it works. The only problem is that I can't access the CLI to do all the checkout stuff. I could enable a Debian VM, but I don't want to, because I use this machine for everything and don't want to mess it up. I can, however, extract a .zip or .tar.* file through a Chrome extension. So basically, I want to know how (and whether) I can access an archive file of the Chromium OS source.
Sorry, but there is no single archive provided for all of the Chromium OS source. A single checkout is made up of hundreds of independent git repositories listed here:
https://chromium.googlesource.com/chromiumos/manifest/+/HEAD/full.xml
If you want a checkout, you'll need to follow the documentation:
https://chromium.googlesource.com/chromiumos/docs/+/HEAD/developer_guide.md
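For reference, the documented flow boils down to using the repo tool to fetch all of those repositories at once; a minimal sketch, assuming a Linux environment with depot_tools/repo installed (see the developer guide above for the authoritative steps):
mkdir -p ~/chromiumos && cd ~/chromiumos
repo init -u https://chromium.googlesource.com/chromiumos/manifest
repo sync -j8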

Debugging Codename One app on Android Studio

I need to debug my CN1 app on Android. That's why I successfully followed the instructions given in this Codename One tutorial (I copied and updated the Gradle files' dependencies as explained).
I am a little bit confused now with the updated sources part.
There is a portion we didn’t get into with the video, copying updated sources directly without sending a build. This is possible if you turn on the new Android Java 8 support. At this point you should be able to remove the libs jar file which contains your compiled data and place your source code directly into the native project for debugging on the device.
If I change things in the native implementation file and launch the debug process, it seems to work. But do I have to remove the userClasses.jar file from the libs directory? When is this jar file actually used?
Furthermore, can I also make changes to the CN1 code from Android Studio (e.g. changes in the main class), or do these require a proper build process on the servers?
UPDATE November 22nd 2016
In my experience, the first time you want to debug your app on Android you need to copy-paste your source files AND the userClasses.jar (into the libs folder). When you update ONLY the native implementation files, you can run a debug without sending a build. But if you change something in the CN1 code, it won't be reflected in Android as long as you don't update the userClasses.jar (which seems logical, since Android does not know anything about CN1).
Any piece of information appreciated,
Cheers,
The build server doesn't have access to your code, just the jar with bytecode/data files and the user jar is "almost" that jar.
We run some bytecode processing such as retrolambda and other things so it isn't exactly what you compiled when you built the project.
If you copy and paste your source directory into the project, you will need to remove that jar so you won't see duplicate classes. You will also need to enable Android Studio's Java 8 language support to get that to work.
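For example, enabling the Java 8 language level is typically done in the module's build.gradle; a minimal sketch (the exact block may vary with your Android Gradle plugin version):
android {
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}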

Work on a remote project with Eclipse via SSH

I have the following boxes:
a) A Windows box with Eclipse CDT,
b) A Linux box, accessible for me only via SSH.
Both the compiler and the hardware required to build and run my project are only on machine B.
I'd like to work "transparently" from a Windows box on that project using Eclipse CDT and be able to build, run and debug the project remotely from within the IDE.
How do I set things up so that:
The building will work? Any simpler solutions than writing a local makefile which would rsync the project and then call a remote makefile to initiate the actual build? Does Eclipse managed build have a feature for that?
The debugging will work?
Preferably - the Eclipse CDT code indexing will work? Do I have to copy all required header files from machine B to machine A and add them to include path manually?
Try the Remote System Explorer (RSE). It's a set of plug-ins to do exactly what you want.
RSE may already be included in your current Eclipse installation. To check in Eclipse Indigo go to Window > Open Perspective > Other... and choose Remote System Explorer from the Open Perspective dialog to open the RSE perspective.
To create an SSH remote project from the RSE perspective in Eclipse:
Define a new connection and choose SSH Only from the Select Remote System Type screen in the New Connection dialog.
Enter the connection information then choose Finish.
Connect to the new host. (Assumes SSH keys are already setup.)
Once connected, drill down into the host's Sftp Files, choose a folder and select Create Remote Project from the item's context menu. (Wait as the remote project is created.)
If done correctly, there should now be a new remote project accessible from the Project Explorer and other perspectives within Eclipse. With the SSH connection set up correctly, passwords can be made an optional part of the normal SSH authentication process. A remote project with Eclipse via SSH is now created.
The very simplest way would be to run Eclipse CDT on the Linux Box and use either X11-Forwarding or remote desktop software such as VNC.
This, of course, is only possible when Eclipse is present on the Linux box and your network connection to the box is sufficiently fast.
The advantage is that, due to everything being local, you won't have synchronization issues, and you don't get any awkward cross-platform issues.
If there is no Eclipse on the box, you could think about sharing your Linux working directory via SMB (or SSHFS) and accessing it from your Windows machine, but that would require quite some setup.
Both would be better than having two copies, especially when it's cross-platform.
I'm in the same spot myself (or was). FWIW, I ended up checking out to a Samba share on the Linux host, editing that share locally on the Windows machine with Notepad++, and then compiling on the Linux box via PuTTY. (We weren't allowed to update the ten-year-old versions of the editors on the Linux host, and it didn't have Java, so I gave up on X11 forwarding.)
Now... I run modern Linux in a VM on my Windows host, add all the tools I want (e.g. CDT) to the VM and then I checkout and build in a chroot jail that closely resembles the RTE.
It's a clunky solution but I thought I'd throw it in to the mix.
My solution is similar to the Samba one, except using sshfs: mount the remote server with sshfs, open the makefile project on the remote machine, and go from there.
It seems I can run a GUI frontend to mercurial this way as well.
Building my remote code is as simple as: ssh address remote_make_command
I am looking for a decent way to debug though. Possibly via gdbserver?
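To make the sshfs approach concrete, the mount and the remote build look roughly like this (user, host, and paths here are placeholders):
sshfs user@remote-host:/home/user/project ~/mnt/project
ssh user@remote-host 'cd /home/user/project && make'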
I tried ssh -X but it was unbearably slow.
I also tried RSE, but it didn't even support building the project with a Makefile (I'm being told that this has changed since I posted my answer, but I haven't tried that out)
I read that NX is faster than X11 forwarding, but I couldn't get it to work.
Finally, I found out that my server supports X2Go (the link has install instructions if yours does not). Now I only had to:
download and unpack Eclipse on the server,
install X2Go on my local machine (sudo apt-get install x2goclient on Ubuntu),
configure the connection (host, auto-login with ssh key, choose to run Eclipse).
Everything is just as if I was working on a local machine, including building, debugging, and code indexing. And there are no noticeable lags.
I had the same problem 2 years ago and I solved it in the following way:
1) I build my projects with makefiles, not managed by Eclipse
2) I use a SAMBA connection to edit the files inside Eclipse
3) Building the project:
Eclipse calls a "local" make with a makefile which opens an SSH connection to the Linux host. On the SSH command line you can pass parameters which are executed on the Linux host. For that parameter I use a makeit.sh shell script which calls the "real" make on the Linux host. The different build targets can also be passed as parameters from the local makefile --> makeit.sh --> makefile on the Linux host.
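A minimal sketch of that chain (host, paths, and the makeit.sh name from above are placeholders; note the tab indent required by make):
The local makefile that Eclipse invokes:
all:
	ssh user@linuxhost './makeit.sh all'
And makeit.sh on the Linux host:
#!/bin/sh
cd /path/to/project && make "$@"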
The way I solved that one was:
For windows:
Export the 'workspace' directory from the Linux machine using samba.
Mount it locally in windows.
Run Eclipse, using the mounted 'workspace' directory as the Eclipse workspace.
Import the project you want and work on it.
For Linux:
Mount the 'workspace' directory using sshfs
Run Eclipse, using the mounted 'workspace' directory as the Eclipse workspace.
Import the project you want and work on it.
In both cases you can either build and run through Eclipse, or build on the remote machine via ssh.
For this case you can use Eclipse PTP (https://eclipse.org/ptp/) for source browsing and building.
You can use this plugin to debug your application:
http://marketplace.eclipse.org/content/direct-remote-c-debugging
How to edit in Eclipse locally, but use a git-based script I wrote (sync_git_repo_from_pc1_to_pc2.sh) to synchronize and build remotely
The script I wrote to do this is sync_git_repo_from_pc1_to_pc2.sh.
Readme: README_git-sync_repo_from_pc1_to_pc2.md
Update: see also this alternative/competitor: GitSync:
How to use Sublime over SSH
https://github.com/jachin/GitSync
This answer currently only applies to syncing between two Linux computers (it may work on macOS too, but that is untested), because I wrote the synchronization script in bash. It is simply a wrapper around git, however, so feel free to take it and turn it into a cross-platform Python solution or something if you wish.
This doesn't directly answer the OP's question, but it is so close that I guarantee it will answer the question of many other people who land on this page (mine included, actually, as I came here first before writing my own solution), so I'm posting it here anyway.
I want to:
develop code using a powerful IDE like Eclipse on a light-weight Linux computer, then
build that code via ssh on a different, more powerful Linux computer (from the command-line, NOT from inside Eclipse)
Let's call the first computer where I write the code "PC1" (Personal Computer 1), and the 2nd computer where I build the code "PC2". I need a tool to easily synchronize from PC1 to PC2. I tried rsync, but it was insanely slow for large repos and took tons of bandwidth and data.
So, how do I do it? What workflow should I use? If you have this question too, here's the workflow that I decided upon. I wrote a bash script to automate the process by using git to automatically push changes from PC1 to PC2 via a remote repository, such as github. So far it works very well and I'm very pleased with it. It is far far far faster than rsync, more trustworthy in my opinion because each PC maintains a functional git repo, and uses far less bandwidth to do the whole sync, so it's easily doable over a cell phone hot spot without using tons of your data.
Setup:
Install the script on PC1 (this solution assumes ~/bin is in your $PATH):
git clone https://github.com/ElectricRCAircraftGuy/eRCaGuy_dotfiles.git
cd eRCaGuy_dotfiles/useful_scripts
mkdir -p ~/bin
ln -s "${PWD}/sync_git_repo_from_pc1_to_pc2.sh" ~/bin/sync_git_repo_from_pc1_to_pc2
cd ..
cp -i .sync_git_repo ~/.sync_git_repo
Now edit the "~/.sync_git_repo" file you just copied above, and update its parameters to fit your case. Here are the parameters it contains:
# The git repo root directory on PC2 where you are syncing your files TO; this dir must *already exist*
# and you must have *already `git clone`d* a copy of your git repo into it!
# - Do NOT use variables such as `$HOME`. Be explicit instead. This is because the variable expansion will
# happen on the local machine when what we need is the variable expansion from the remote machine. Being
# explicit instead just avoids this problem.
PC2_GIT_REPO_TARGET_DIR="/home/gabriel/dev/eRCaGuy_dotfiles" # explicitly type this out; don't use variables
PC2_SSH_USERNAME="my_username" # explicitly type this out; don't use variables
PC2_SSH_HOST="my_hostname" # explicitly type this out; don't use variables
Git clone the repo you want to sync on both PC1 and PC2.
Ensure your ssh keys are all set up to be able to push and pull to the remote repo from both PC1 and PC2. Here are some helpful links:
https://help.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh
https://help.github.com/en/github/authenticating-to-github/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent
Ensure your ssh keys are all set up to ssh from PC1 to PC2.
Now cd into any directory within the git repo on PC1, and run:
sync_git_repo_from_pc1_to_pc2
That's it! About 30 seconds later everything will be magically synced from PC1 to PC2, and it will be printing output the whole time to tell you what it's doing and where it's doing it on your disk and on which computer. It's safe too, because it doesn't overwrite or delete anything that is uncommitted. It backs it up first instead! Read more below for how that works.
Here's the process this script uses (i.e., what it's actually doing):
From PC1: It checks whether any uncommitted changes exist on PC1. If so, it commits them to a temporary commit on the current branch. It then force-pushes them to a remote SYNC branch. Then it uncommits the temporary commit it just made on the local branch, and puts the local git repo back exactly how it was by re-staging any files that were previously staged at the time you called the script. Next, it rsyncs a copy of the script over to PC2, and makes an ssh call to tell PC2 to run the script with a special option to just do the PC2 stuff.
Here's what PC2 does: it cds into the repo, and checks to see if any local uncommitted changes exist. If so, it creates a new backup branch forked off of the current branch (sample name: my_branch_SYNC_BAK_20200220-0028hrs-15sec <-- notice that's YYYYMMDD-HHMMhrs--SSsec), and commits any uncommitted changes to that branch with a commit message such as DO BACKUP OF ALL UNCOMMITTED CHANGES ON PC2 (TARGET PC/BUILD MACHINE). Now, it checks out the SYNC branch, pulling it from the remote repository if it is not already on the local machine. Then, it fetches the latest changes on the remote repository, and does a hard reset to force the local SYNC repository to match the remote SYNC repository. You might call this a "hard pull". It is safe, however, because we already backed up any uncommitted changes we had locally on PC2, so nothing is lost!
That's it! You now have produced a perfect copy from PC1 to PC2 without even having to ensure clean working directories, as the script handled all of the automatic committing and stuff for you! It is fast and works very well on huge repositories. Now you have an easy mechanism to use any IDE of your choice on one machine while building or testing on another machine, easily, over a wifi hot spot from your cell phone if needed, even if the repository is dozens of gigabytes and you are time and resource-constrained.
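In plain git terms, the PC1 side is roughly the following (a simplified sketch; the SYNC branch name and the temporary commit are the script's own conventions, and the real script handles edge cases such as there being nothing to commit):
git add -A                          # stage all changes, including new files
git commit -m "TEMP SYNC COMMIT"    # temporary commit of everything uncommitted
git push --force origin HEAD:SYNC   # force-push the current state to the remote SYNC branch
git reset HEAD~                     # uncommit the temporary commit, keeping the changes
And the PC2 side, after any local uncommitted changes have been backed up to a backup branch:
git checkout SYNC                   # creates a local SYNC from origin/SYNC if needed
git fetch origin
git reset --hard origin/SYNC        # the "hard pull": force local SYNC to match the remote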
Resources:
The whole project: https://github.com/ElectricRCAircraftGuy/eRCaGuy_dotfiles
See tons more links and references in the source code itself within this project.
How to do a "hard pull", as I call it: How do I force "git pull" to overwrite local files?
Related:
git repository sync between computers, when moving around?

Icons from remote files

I have started coding an FTP client application (for fun). I’m trying to represent remotely hosted files with icons. For example, let’s say I’m browsing the root folder of an FTP server (/) and want to display the Backup.zip file with the icon association from that client operating system. On some systems, this may be the windows compression icon and other operating systems this may be WinZip or WinRAR icons.
I have the client browsing local files with the SHGetFileInfo() function. This works great with files that are local, however, this function requires the physical file in order to retrieve the associated icon. So, this will not work with remotely hosted files. I have found some samples of loading icons given a file extension, and this is really where the question comes in... What would be the best strategy to get icons associated to remote files?
Go to the registry every time and look up extension to icon associations
Create 1 byte files with each extension and use the SHGetFileInfo() function for remote files (using local 1 byte files as association for remote files)
Other strategies???
What would a professional software company creating an FTP client do?
Thank you for your time.
-Jessy Houle
I suggest that you don't go to the registry every time: go if you need to, but if you've already been for a given filetype then remember/cache that result (within your program) and reuse it.
Use the procedure from this previous Stack Overflow question, which covers the same idea and uses the registry instead of an actual file:
How can I get the filetype icon that Windows Explorer shows?
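For reference, the shell can also hand back an icon from just a file name and attributes, with no physical file on disk, via the SHGFI_USEFILEATTRIBUTES flag; a minimal Win32 sketch (the function name is mine):
#include <windows.h>
#include <shellapi.h>

// Returns the small icon associated with a file name such as L"dummy.zip".
// SHGFI_USEFILEATTRIBUTES tells the shell to act as if the file existed,
// so this works for remote files you only know by name.
HICON IconForExtension(const wchar_t* fileName)
{
    SHFILEINFOW sfi = {0};
    SHGetFileInfoW(fileName, FILE_ATTRIBUTE_NORMAL, &sfi, sizeof(sfi),
                   SHGFI_ICON | SHGFI_SMALLICON | SHGFI_USEFILEATTRIBUTES);
    return sfi.hIcon;  // caller must DestroyIcon() the returned handle
}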
