Is it possible to configure `.devcontainer` settings globally? - vscode-remote

My development workflow uses one Docker container, spun up through Docker Compose, for all projects.
As best I can tell, the Remote - Containers extension only allows a configuration file to be created per project.
Having the ability to set up the extension either through a global devcontainer.json or in the User settings would be ideal, so that:
It can use a docker-compose.yml file external to the project files when attaching to a running container.
A user other than root can be set when attaching to the container through the extension.
It can take advantage of Docker Compose letting you attach by container name instead of by ID; I rebuild the containers often, which causes previous workspaces to fail when attempting to reconnect to the remote.
Perhaps I'm missing something obvious, but I've read through the documentation and also looked through the autocomplete options in the User settings. I assumed that since devcontainer.json is similar in nature to launch.json it too could be set in User settings, but it was not an available option the way "launch" is.
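For context, here is a minimal per-project devcontainer.json covering the external Compose file and non-root user points; the paths, service name, and user here are made up. The missing piece is only the ability to define this once globally rather than per project:
    {
        "name": "shared-dev",
        "dockerComposeFile": "../shared/docker-compose.yml",
        "service": "dev",
        "workspaceFolder": "/workspace",
        "remoteUser": "vscode"
    }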

Related

How to glue together Vert.x Web and Kotlin React using Gradle in a Kotlin MPP

Problem
It is not clear to me how to configure a Kotlin MPP (multiplatform project) using Gradle (Kotlin DSL) to use Vert.x Web for the Kotlin/JVM target with Kotlin React on the Kotlin/JS target.
Update
You can check out the updated minimal example for a working solution, inspired by Alexey Soshin's approach.
What I've tried
Have a look at my minimal example on GitHub of a Kotlin MPP with a Vert.x web server on the JVM target and Kotlin React on the JS target.
You can make it work if you:
First run the Gradle task browserDevelopmentRun (I don't understand the magic behind it), and after the browser opens and renders the React SPA (single-page application), you can
stop that task, and then
start the Vert.x backend with the task run.
After that, without refreshing the remaining SPA in the browser, you can confirm that it communicates with the backend by pressing the button; it will alert the received data.
Question
What are the possible ways/approaches to glue these two targets together, so that when I run my application the JS target is assembled and served via the JVM backend conveniently?
I am thinking that perhaps Gradle should trigger some of the Kotlin browser tasks and then make their output available in some way to the Vert.x backend.
If you'd like to run a single task, though, you need your server task to depend on your JS compilation. In your build.gradle.kts add the following:
    tasks.withType<org.jetbrains.kotlin.gradle.tasks.KotlinCompile> {
        dependsOn(tasks.getByName<org.jetbrains.kotlin.gradle.targets.js.webpack.KotlinWebpack>("jsBrowserProductionWebpack"))
    }
Now invoking run will also invoke webpack.
Next you want to serve your files. There are different ways of doing it. One is to copy them into the Vert.x resources directory using Gradle. Another is to point Vert.x at the directory where webpack puts them by default:
    route().handler(StaticHandler.create("../../../distributions"))
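For the copy approach, here is a sketch of a Gradle (Kotlin DSL) task that stages the webpack bundle next to the JVM build output; the task name and directories are assumptions about the default layout:
    // Hypothetical task: stage the webpack bundle where StaticHandler can serve it.
    tasks.register<Copy>("copyJsToJvmResources") {
        dependsOn("jsBrowserProductionWebpack")
        from(layout.buildDirectory.dir("distributions")) // default webpack output
        into(layout.buildDirectory.dir("webapp"))        // point StaticHandler here instead
    }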
There are a bunch of different things going on here.
First, both your Vert.x server and the webpack dev server run on the same port. The easiest way to fix that is to start Vert.x on some other port, like 18080:
    .listen(18080, "localhost") { result ->
And then change your index.kt file to use that port:
    val result: SomeData = get("http://localhost:18080/data")
Because we now run on different ports, we also need to install a CORS handler:
    router.apply {
        route().handler(CorsHandler.create("*"))
    }
Last is the fact that you cannot run two never-ending Gradle tasks from the same process (OK, you can, but that's complicated). So what I suggest is that you open two terminals and run:
    ./gradlew run
in one, and
    ./gradlew jsBrowserDevelopmentRun
in the other.
Having done all that, you should see the app working end to end.
Now, this is for development mode. For production mode, you probably don't want to run jsBrowserDevelopmentRun, but instead tie jsBrowserProductionWebpack to your run task and serve spa.js from your Vert.x app using StaticHandler. But this answer is already too long.
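Putting the pieces together, a minimal sketch of the server side might look like this; the /data route and its JSON payload are stand-ins for whatever your app really serves:
    import io.vertx.core.Vertx
    import io.vertx.ext.web.Router
    import io.vertx.ext.web.handler.CorsHandler
    import io.vertx.ext.web.handler.StaticHandler

    fun main() {
        val vertx = Vertx.vertx()
        val router = Router.router(vertx)
        // Allow the webpack dev server, which runs on another port, to call us.
        router.route().handler(CorsHandler.create("*"))
        // Hypothetical data endpoint the React button calls.
        router.get("/data").handler { ctx ->
            ctx.response()
                .putHeader("content-type", "application/json")
                .end("""{"hello":"world"}""")
        }
        // Serve the webpack output for production mode.
        router.route().handler(StaticHandler.create("../../../distributions"))
        vertx.createHttpServer().requestHandler(router).listen(18080, "localhost")
    }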

Getting Error while adding File System ISE Logic App Connector

I have created an ISE Logic Apps environment, and am trying to add the File System ISE connector to the Managed connectors list, as it does not appear in my default list. When I click on '+Add', I can see the File System connector in the drop-down that comes up, but when I select it and click on 'Create', I get the following error:
Failed to create connector 'isefilesystem'
Operation name: Set Integration Service Environment managed API
Time stamp: Mon Jan 13 2020 16:53:24 GMT+0000 (GMT Standard Time)
Event initiated by: xxxxxxxxx#xxxx.xxx
Error code: IntergrationServiceEnvironmentManagedApiDefinitionTagsNotSupported
Message: The tags are not supported in the managed API 'isefilesystem'.
The ISE File System connector is available, but it will not be automatically added to the ISE; you will need to add it manually. Our engineers are working on getting it added automatically with new ISE deployments.
Here are the steps from the documentation for adding it manually to the ISE:
On your ISE menu, under Settings, select Managed connectors. On the toolbar, select Add.
On the Add a new managed connector pane, open the Find connector list. Select the ISE connector that you want to use but isn't yet deployed in your ISE. Select Create.
Only ISE connectors that are eligible but not yet deployed to your ISE appear available for you to select. Connectors that are already deployed in your ISE appear unavailable for selection.
Check out this GitHub issue as well for details.
The File System connector is not yet available in ISE. You can use the shared connector (with gateway) in a Logic App in ISE until it is available.
It uses the on-premises data gateway. Yes, there is already a work item for this; it is in progress and should arrive in the near future, but the timeline is subject to change. As far as new features go, we are not able to disclose much at this time.
You could also vote up this feedback to help get the feature delivered more quickly.
I am having problems with the ISE Create File (Preview).
I have an API Connection defined with the Root Folder setting using an IP address, e.g. 192.168.1.23, because there is a DNS issue with hybrid cloud-to-on-prem lookup, or so I am told.
The Logic App portal editor in Designer mode behaves strangely when configuring the folder path in the Create File action. When using the pop-out folder picker I see "The user name or password is incorrect".
I have made sure that the credentials are correct and have tested successfully via other means.
Is there a workaround?
Is this a known issue?

Restrict a file from being edited in GitLab (.gitlab-ci.yml)

As you know, we have a file for the GitLab CI configuration named '.gitlab-ci.yml',
and this file shouldn't be edited by the developers, so I decided to prevent them from editing it.
The thing is, GitLab says you can lock a file against editing, but the prerequisite for this is a Premium account.
What can I do when I don't have a Premium account?
Do you have any idea how to lock a file against editing?
Check if you have access to the Push Rules feature, which is a kind of pre-receive hook.
Or you can set up a pre-receive hook yourself if your GitLab server is on-premise.
In both cases, you can list the files being pushed in that hook and fail the push if one of them is .gitlab-ci.yml, as sketched below.
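A minimal sketch of such a hook follows. Any language works for server-side hooks; this one is written as a Kotlin script only to match the rest of this page's examples, and it assumes the kotlin launcher is on the server's PATH. Hook input lines have the form "<old-sha> <new-sha> <ref-name>":
    #!/usr/bin/env kotlin
    // Hypothetical pre-receive hook: reject pushes that modify .gitlab-ci.yml.
    import kotlin.system.exitProcess

    val protectedFile = ".gitlab-ci.yml"
    val zeroSha = "0".repeat(40) // all-zero SHA marks ref creation/deletion

    generateSequence(::readLine).forEach { line ->
        val parts = line.trim().split(" ")
        val old = parts[0]
        val new = parts[1]
        if (old == zeroSha || new == zeroSha) return@forEach // skip created/deleted refs in this sketch
        val changed = ProcessBuilder("git", "diff", "--name-only", "$old..$new")
            .start().inputStream.bufferedReader().readLines()
        if (protectedFile in changed) {
            System.err.println("push rejected: $protectedFile may not be modified")
            exitProcess(1)
        }
    }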
As of today, the official way (really a workaround) for this seems to be creating a different repository for the .yml file with more restrictive permissions, and then referencing that .yml file from your project:
A .gitlab-ci.yml may contain rules to deploy an application to the production server. This deployment usually runs automatically after pushing a merge request. To prevent developers from changing the .gitlab-ci.yml, you can define it in a different repository. The configuration can reference a file in another project with a completely different set of permissions (similar to separating a project for deployments). In this scenario, the .gitlab-ci.yml is publicly accessible, but can only be edited by users with appropriate permissions in the other project.
https://docs.gitlab.com/ee/ci/environments/deployment_safety.html#protect-gitlab-ciyml-from-change
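In practice, that means setting the project's CI/CD configuration file path (Settings > CI/CD > General pipelines > CI/CD configuration file) to a file living in another project, for example (the group and project names here are made up):
    .gitlab-ci.yml@my-group/ci-definitions
Developers can then push to the application repository freely, while only users with write access to my-group/ci-definitions can change the pipeline.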
Also, there is a discussion on this matter here:
https://gitlab.com/gitlab-org/gitlab/-/issues/15632

Storing a .jks file in Fabric profile

In our Apache Camel project, we are consuming a REST service which requires a .jks file.
Currently we are storing the .jks file in a physical location and referring to it in the Camel project. But this can't always be used, as we may have access only to the Fuse Management Console and not to a physical location reachable from the management console.
Another option is to store the key file within the bundle, which can't be employed because the certificate may change based on the environment.
In this scenario, what would be a better way to store the key file?
Note
One option I thought about was storing the .jks file within a Fabric profile, but I couldn't find any way to do that. Is it possible to store a file in a Fabric profile?
What about storing the .jks in a Java package and reading it as a resource?
Your bundle imports org.niyasc.jks and loads the file from there. The bundle need not change between environments.
Then you write two bundles which provide the same package org.niyasc.jks, one with the production file and one with the test file.
Production env:
RestConsumerBundle + ProductionJksProviderBundle
Test env:
RestConsumerBundle + TestJksProviderBundle
Mind that deploying both of them at the same time may be possible, and RestConsumerBundle will be bound to whichever bundle was deployed first. You can eventually play with OSGi directives to give priority to one of them.
EDIT:
A more elegant solution would be creating an OSGi service which exposes the .jks file as an InputStream or byte[]. You can even play with JNDI if you feel like it.
From Blueprint, declare the dependency as mandatory, so your bundle will not start if the service is not available:
    <!-- RestConsumerBundle -->
    <reference id="jksProvider"
               interface="org.niyasc.jks.Provider"
               availability="mandatory"/>
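On the provider side, each environment bundle publishes the same service. Here is a sketch of the production bundle's Blueprint, where org.niyasc.jks.impl.ClasspathJksProvider is a hypothetical implementation class that opens the keystore via getResourceAsStream:
    <!-- ProductionJksProviderBundle (ClasspathJksProvider is hypothetical) -->
    <bean id="prodJks" class="org.niyasc.jks.impl.ClasspathJksProvider">
        <argument value="/production-truststore.jks"/>
    </bean>
    <service ref="prodJks" interface="org.niyasc.jks.Provider"/>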
Storing the JKS files in the Fuse profile could be a good idea.
If you have a broker profile created, such as "mq-broker-Group.BrokerName", take a look at it via the Fuse Web Console.
You can then access the jks file as a resource in the property file, as in "truststore.file=profile:truststore.jks"
And also check the "Customizing the SSL keystore.jks and truststore.jks file" section of this chapter:
https://access.redhat.com/documentation/en-us/red_hat_jboss_fuse/6.3/html/fabric_guide/mq#MQ-BrokerConfig
It has some good pointers.
Regarding how to add files to a Fabric profile, you can store any resources under src/main/fabric8 and use the fabric8 Maven plugin. For more, see:
https://fabric8.io/gitbook/mavenPlugin.html
-Codrin

Exporting a Typo3 site bit by bit

(edit: I'm leaving all the mistaken assumptions in just in case someone else makes the same mistakes)
I have an ancient Typo3 3.8.1 site on a remote server. I don't have access to that server, and the team in charge of maintaining the site doesn't know who to contact to get access to the server. I do have the admin rights on that site, though. (edit: no I don't. oops.)
This is what I see in the (not) admin menu.
I'm not sure if this version supports extensions, I can't find an extension manager anywhere. (because I'm not an admin)
I want to export the site so I can host it on a server on my own domain instead. The problem is that the export file is too large for me to download. Will I destroy the directory structure if I export a bunch of pages at a time?
If you have admin access to the backend you can try to install Quixplorer - a file manager extension. Using it you can try to zip the folders in the main directory, i.e. typo3, typo3conf, fileadmin etc., one by one and download them via the browser.
It's important to download and remove typo3conf.zip from the server as soon as possible, because it contains sensitive data.
Additionally, you can also install the phpMyAdmin extension (search the repository) if you don't have another MySQL client.
Edit:
If you can't use Quixplorer, the only way is... to write your own extension and upload it via the Extension Manager; there you'll need to perform primitive file system operations like:
(PHP)
    system('zip -r t3c.zip typo3conf/');
Sometimes the server allows more memory and execution time than the T3D export sets for itself. So, if you can change PHP files on that server, try changing typo3/sysext/impexp/class.tx_impexp.php: search for ini_set and change those settings. If the server allows it, you can then create bigger t3d files.
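For example, you might raise the limits like this (the values are arbitrary; use whatever your server tolerates):
(PHP)
    ini_set('memory_limit', '512M');
    ini_set('max_execution_time', '3600');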
And you could try some shell extensions to get your hands on that server:
http://typo3.org/extensions/repository/view/phpshell
http://typo3.org/extensions/repository/view/mw_shell
http://typo3.org/extensions/repository/view/shell
But to answer your initial question: you can create a couple of T3D files and import them again. Just force the uids when you import them - and install all needed extensions first!
