Troubleshooting export with heredoc

INTRODUCTION:
I have been using this construct to set the current group after opening a terminal on a compute server:
newgrp project1_group << ANYCODE
cd ~/WORK/project1_rundir
bsub xterm &
ANYCODE
After executing this script, a new terminal is opened on the compute server, in the specified project rundir, and the primary group is set correctly.
It works just fine...
PROBLEM DESCRIPTION:
Now I would like to set an environment variable on the compute server using the same construct:
export POLICYFILE=~/WORK/project1_rundir/.policyfile << ANYCODE
cd ~/WORK/project1_rundir
bsub xterm &
ANYCODE
It doesn't do anything, not even a terminal is opened.
Does anyone have an explanation why newgrp works and export does not?
Is there a way to make this work (not necessarily using the heredoc)?

The problem is solved (even better, without the heredoc)... For anyone wondering about the why: newgrp starts a new shell that reads the heredoc as its standard input, so the commands inside it actually run, whereas export is an ordinary builtin that ignores its stdin, so the heredoc body is never executed.
The final solution is implemented as follows:
cd ~/WORK/project1_rundir
bsub -I -env "all, POLICYFILE=~/WORK/project1_rundir/.policyfile" xterm &
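For completeness, a variant that keeps the original heredoc should also be possible: set the variable inside the heredoc, so the shell started by newgrp exports it before calling bsub. This is an untested sketch and assumes the LSF job inherits the submission shell's environment:
newgrp project1_group << ANYCODE
export POLICYFILE=~/WORK/project1_rundir/.policyfile
cd ~/WORK/project1_rundir
bsub xterm &
ANYCODE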

SqlPackage seems to escape right square bracket ( ] ) in variable value passed to .dacpac

I'm passing a variable to my .dacpac but the text received is not what I passed. Example command:
sqlpackage /v:TextTest="abc]123" /Action:Publish /SourceFile:"my.dacpac" /TargetDatabaseName:MyDb /TargetServerName:"."
My variable $(TextTest) comes out as "abc]]123" instead of the original "abc]123".
Is there anything I can do to prevent SqlPackage from corrupting my input variables before they are passed to the .dacpac scripts?
Unfortunately, I don't think there is a good answer. This appears to be a very old bug. I'm seeing references to this issue going back 10 years.
Example A: https://web.archive.org/web/20220831180208/https://social.msdn.microsoft.com/forums/azure/en-US/f1d153c2-8f42-4148-b313-3449075c612f/sql-server-database-project-sqlcmd-variables-with-closing-square-brackets
They mention a "workaround" in the post, but they link to a Microsoft Connect issue which no longer exists and is not available on archive.org.
My best guess is that the "workaround" is to generate the deploy script rather than publishing, and then manually modify the variable value in the script...which is not really a workaround if you are working on a build/release pipeline or any sort of automation.
I tried testing this to see if it would make any difference using Microsoft.SqlServer.Dac.DacServices.Publish() directly (via dbatools PowerShell module), but unfortunately the problem exists there as well.
I also tested it against every keyboard accessible symbol and that is the only character it seems to have a problem with.
Another option, though still not great, is to generate the deployment script, then execute it using SQLCMD.EXE.
So for example this would work:
sqlpackage /Action:Script `
/DeployScriptPath:script.sql `
/SourceFile:foobar.dacpac `
/TargetConnectionString:'Server=localhost;Database=foobar;Uid=sa;Password=yourStrong(!)Password' `
/p:CommentOutSetVarDeclarations=True
SQLCMD -S 'localhost' -d 'foobar' -U 'sa' -P 'yourStrong(!)Password' `
-i .\script.sql `
-v TextTest = "abc]123" `
-v DatabaseName = "foobar"
/p:CommentOutSetVarDeclarations=True - This setting is key; otherwise the values you pass to SQLCMD with -v will be overridden by the :setvar declarations in the generated file. Just make sure you specify ALL variables, not just the one you need. So open the file, look at what is commented out, and make sure you are supplying everything that is needed.
It's not a great option...but it's at least scriptable and doesn't require manual intervention.

Is there any way to connect to a remote container without opening a folder?

I am implementing a dev environment for Arduino and other MCUs. I have a container image with all the compilers and tool-chains required, and I have a script to connect VSCode to it.
The connection magic is done by this:
CONTAINER_NAME="dev-environments-mcus"
hex=$(printf \{\"containerName\"\:\""$CONTAINER_NAME"\"\} | od -A n -t x1 | tr -d '[\n\t ]')
code --folder-uri vscode-remote://attached-container+${hex}/App_Home/mcu-projects
This works perfectly, but the problem is that by doing this I am opening a specific folder in the container, which is not ideal for a generic dev environment.
I would like to know if it is possible to replicate on the command line the "Attach in new window" button behaviour, which opens an "empty" window when you click on it.
Edit1: Replacing --folder-uri with --file-uri makes my script work better, but I would like to open no file at all, or at least open the start page.
PS: Just in case you are curious this is the project github
OK, I think I managed to solve it. I will share what I did in case anyone finds themselves in the same situation.
I just had to use the option --file-uri rather than --folder-uri and append a slash / at the end of the command. Now no folder or empty file is opened when VSCode starts.
This is how the script looks now:
CONTAINER_NAME="dev-environments-mcus"
hex=$(printf \{\"containerName\"\:\""$CONTAINER_NAME"\"\} | od -A n -t x1 | tr -d '[\n\t ]')
code --file-uri vscode-remote://attached-container+${hex}/

manipulating strings and use commands with the result in bash [duplicate]

I'm trying to write a small script to change the current directory to my project directory:
#!/bin/bash
cd /home/tree/projects/java
I saved this file as proj, added execute permission with chmod, and copied it to /usr/bin. When I call it by:
proj, it does nothing. What am I doing wrong?
Shell scripts are run inside a subshell, and each subshell has its own concept of what the current directory is. The cd succeeds, but as soon as the subshell exits, you're back in the interactive shell and nothing ever changed there.
One way to get around this is to use an alias instead:
alias proj="cd /home/tree/projects/java"
You're doing nothing wrong! You've changed the directory, but only within the subshell that runs the script.
You can run the script in your current process with the "dot" command:
. proj
But I'd prefer Greg's suggestion to use an alias in this simple case.
The cd in your script technically worked as it changed the directory of the shell that ran the script, but that was a separate process forked from your interactive shell.
A POSIX-compatible way to solve this problem is to define a shell procedure rather than a shell-invoked command script.
jhome () {
cd /home/tree/projects/java
}
You can just type this in or put it in one of the various shell startup files.
The cd is done within the script's shell. When the script ends, that shell exits, and you are left in the directory you were in before. "Source" the script, don't run it. Instead of:
./myscript.sh
do
. ./myscript.sh
(Notice the dot and space before the script name.)
To make a bash script that will cd to a selected directory:
Create the script file
#!/bin/sh
# file : /scripts/cdjava
#
cd /home/askgelal/projects/java
Then create an alias in your startup file.
#!/bin/sh
# file /scripts/mastercode.sh
#
alias cdjava='. /scripts/cdjava'
I created a startup file where I dump all my aliases and custom functions.
Then I source this file into my .bashrc to have it set on each boot.
For example, create a master aliases/functions file: /scripts/mastercode.sh
(Put the alias in this file.)
Then at the end of your .bashrc file:
source /scripts/mastercode.sh
Now it's easy to cd to your java directory: just type cdjava and you are there.
You can use . to execute a script in the current shell environment:
. script_name
or alternatively, the more readable but shell-specific alias source:
source script_name
This avoids the subshell, and allows any variables or builtins (including cd) to affect the current shell instead.
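A quick way to see the difference for yourself (the script name and variable are just throwaway examples):
# create a throwaway script that changes directory and sets a variable
printf 'cd /tmp\nMYVAR=hello\n' > goto_tmp.sh
bash goto_tmp.sh; pwd; echo "$MYVAR"   # still the old directory, MYVAR is empty
. ./goto_tmp.sh; pwd; echo "$MYVAR"    # now /tmp, and MYVAR prints hello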
Jeremy Ruten's idea of using a symlink triggered a thought that hasn't crossed any other answer. Use:
CDPATH=:$HOME/projects
The leading colon is important; it means that if there is a directory 'dir' in the current directory, then 'cd dir' will change to that, rather than hopping off somewhere else. With the value set as shown, you can do:
cd java
and, if there is no sub-directory called java in the current directory, then it will take you directly to $HOME/projects/java - no aliases, no scripts, no dubious execs or dot commands.
My $HOME is /Users/jleffler; my $CDPATH is:
:/Users/jleffler:/Users/jleffler/mail:/Users/jleffler/src:/Users/jleffler/src/perl:/Users/jleffler/src/sqltools:/Users/jleffler/lib:/Users/jleffler/doc:/Users/jleffler/work
Use exec bash at the end
A bash script operates on its current environment or on that of its
children, but never on its parent environment.
However, this question often gets asked because one wants to be left at a (new) bash prompt in a certain directory after execution of a bash script from within another directory.
If this is the case, simply execute a child bash instance at the end of the script:
#!/usr/bin/env bash
cd /home/tree/projects/java
echo -e '\nHit [Ctrl]+[D] to exit this child shell.'
exec bash
To return to the previous, parental bash instance, use Ctrl+D.
Update
At least with newer versions of bash, the exec on the last line is no longer required. Furthermore, the script could be made to work with whatever preferred shell by using the $SHELL environment variable. This then gives:
#!/usr/bin/env bash
cd desired/directory
echo -e '\nHit [Ctrl]+[D] to exit this child shell.'
$SHELL
I got my code to work by using . <your file name>
./<your file name> does not work because it doesn't change your directory in the terminal; it only changes the directory within the shell running that script.
Here is my program
#!/bin/bash
echo "Taking you to eclipse's workspace."
cd /Developer/Java/workspace
Here is my terminal
nova:~ Kael$
nova:~ Kael$ . workspace.sh
Taking you to eclipse's workspace.
nova:workspace Kael$
simply run:
cd /home/xxx/yyy && command_you_want
When you fire a shell script, it runs a new instance of that shell (/bin/bash). Thus, your script just fires up a shell, changes the directory and exits. Put another way, cd (and other such commands) within a shell script do not affect nor have access to the shell from which they were launched.
You can do following:
#!/bin/bash
cd /your/project/directory
# start another shell and replacing the current
exec /bin/bash
EDIT: This could be 'dotted' as well, to prevent creation of subsequent shells.
Example:
. ./previous_script (with or without the first line)
In my particular case I needed to change to the same directory many times.
So in my .bashrc (I use Ubuntu) I've added the following:
1 -
$ nano ~/.bashrc
function switchp
{
cd /home/tree/projects/$1
}
2-
$ source ~/.bashrc
3 -
$ switchp java
It will directly do: cd /home/tree/projects/java
Hope that helps!
It only changes the directory for the script itself, while your current directory stays the same.
You might want to use a symbolic link instead. It allows you to make a "shortcut" to a file or directory, so you'd only have to type something like cd my-project.
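For example, a minimal sketch using the path from the question (the link name my-project is made up):
# make a shortcut in your home directory that points at the project
ln -s /home/tree/projects/java ~/my-project
cd ~/my-project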
You can combine Adam & Greg's alias and dot approaches to make something that can be more dynamic:
alias project=". project"
Now running the project alias will execute the project script in the current shell as opposed to the subshell.
You can combine an alias and a script,
alias proj="cd \`/usr/bin/proj !*\`"
provided that the script echos the destination path. Note that those are backticks surrounding the script name.
For example, your script could be
#!/bin/bash
echo /home/askgelal/projects/java/$1
The advantage with this technique is that the script could take any number of command line parameters and emit different destinations calculated by possibly complex logic.
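For instance, here is a sketch of such a script with slightly more logic (the extra mappings are invented for illustration):
#!/bin/bash
# print a destination path based on the first argument
case "$1" in
  java) echo /home/askgelal/projects/java ;;
  web)  echo /home/askgelal/projects/web ;;
  *)    echo "$HOME" ;;
esac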
To navigate directories quickly, there's $CDPATH, cdargs, and ways to generate aliases automatically:
http://jackndempsey.blogspot.com/2008/07/cdargs.html
http://muness.blogspot.com/2008/06/lazy-bash-cd-aliaes.html
https://web.archive.org/web/1/http://articles.techrepublic%2ecom%2ecom/5100-10878_11-5827311.html
In your ~/.bash_profile file, add the following function:
move_me() {
cd ~/path/to/dest
}
Restart terminal and you can type
move_me
and you will be moved to the destination folder.
You can use the && operator:
cd myDirectory && ls
While sourcing the script you want to run is one solution, you should be aware that this script can then directly modify the environment of your current shell. Also, it is no longer possible to pass arguments.
Another way to do it is to implement your script as a function in bash.
function cdbm() {
cd whereever_you_want_to_go
echo "Arguments to the functions were $1, $2, ..."
}
This technique is used by autojump (http://github.com/joelthelion/autojump/wiki) to provide you with shell directory bookmarks that learn from your usage.
You can create a function like below in your .bash_profile and it will work smoothly.
The following function takes an optional parameter, which is a project name.
For example, you can just run
cdproj
or
cdproj project_name
Here is the function definition.
cdproj(){
dir=/Users/yourname/projects
if [ "$1" ]; then
cd "${dir}/${1}"
else
cd "${dir}"
fi
}
Don't forget to source your .bash_profile.
This should do what you want. Change to the directory of interest (from within the script), and then spawn a new bash shell.
#!/bin/bash
# saved as mov_dir.sh
cd ~/mt/v3/rt_linux-rt-tools/
bash
If you run this, it will take you to the directory of interest, and when you exit it, it will bring you back to the original place.
root@intel-corei7-64:~# ./mov_dir.sh
root@intel-corei7-64:~/mt/v3/rt_linux-rt-tools# exit
root@intel-corei7-64:~#
This will even take you back to your original directory when you exit (CTRL+D).
I did the following:
create a file called case
paste the following in the file:
#!/bin/sh
cd /home/"$1"
save it and then:
chmod +x case
I also created an alias in my .bashrc:
alias disk='cd /home/; . case'
now when I type:
case 12345
essentially I am typing:
cd /home/12345
You can type any folder after 'case':
case 12
case 15
case 17
which is like typing:
cd /home/12
cd /home/15
cd /home/17
respectively
In my case the path is much longer - these guys summed it up with the ~ info earlier.
As explained in the other answers, you have changed the directory, but only within the subshell that runs the script. This does not impact the parent shell.
One solution is to use a bash function instead of a bash script, by placing your script code into a function. That makes the function available as a command, and it is then executed without a child process, so any cd command affects the calling shell.
Bash functions:
One feature of the bash profile is that it can store custom functions that can be run in the terminal or in bash scripts the same way you run applications/commands; this can also be used as a shortcut for long commands.
To make your function available system-wide, you will need to copy it to the end of several files:
/home/user/.bashrc
/home/user/.bash_profile
/root/.bashrc
/root/.bash_profile
You can run sudo kwrite /home/user/.bashrc /home/user/.bash_profile /root/.bashrc /root/.bash_profile to edit/create those files quickly.
Howto:
Copy your bash script code into a new function at the end of your bash profile file and restart your terminal; you can then run cdd or whatever function you wrote.
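If you would rather not edit each file by hand, the copy step can be automated; a sketch using the cdd example from below (adjust the function body and the file list to your setup):
# append the same function definition to every profile file listed
for f in ~/.bashrc ~/.bash_profile /root/.bashrc /root/.bash_profile; do
  sudo tee -a "$f" > /dev/null <<'EOF'
cdd() {
  cd ..
}
EOF
done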
Script Example
Making a shortcut to cd .. with cdd
cdd() {
cd ..
}
ls shortcut
ll() {
ls -l -h
}
ls -a shortcut
lll() {
ls -l -h -a
}
If you are using fish as your shell, the best solution is to create a function. As an example, given the original question, you could copy the 4 lines below and paste them into your fish command line:
function proj
cd /home/tree/projects/java
end
funcsave proj
This will create the function and save it for use later. If your project changes, just repeat the process using the new path.
If you prefer, you can manually add the function file by doing the following:
nano ~/.config/fish/functions/proj.fish
and enter the text:
function proj
cd /home/tree/projects/java
end
and finally press ctrl+x to exit and y followed by return to save your changes.
(NOTE: the first method of using funcsave creates the proj.fish file for you).
You don't need a script; just set the correct option and create an environment variable.
shopt -s cdable_vars
in your ~/.bashrc allows you to cd to the contents of environment variables.
Create such an environment variable:
export myjava="/home/tree/projects/java"
and you can use:
cd myjava
Other alternatives.
Note the discussion How do I set the working directory of the parent process?
It contains some hackish answers, e.g.
https://stackoverflow.com/a/2375174/755804 (changing the parent process directory via gdb, don't do this) and https://stackoverflow.com/a/51985735/755804 (the command tailcd that injects cd dirname to the input stream of the parent process; well, ideally it should be a part of bash rather than a hack)
It is an old question, but I am really surprised I don't see this trick here
Instead of using cd you can use
export PWD=the/path/you/want
No need to create subshells or use aliases.
Note that it is your responsibility to make sure the/path/you/want exists.
I have to work in tcsh, and I know this is not an elegant solution, but for example, if I had to change folders to a path where one word is different, the whole thing can be done in the alias
alias alias_name 'set a = `pwd`; set b = `echo $a | replace "Trees" "Tests"` ; cd $b'
If the path is always fixed, then just
alias alias_name2 'cd path/you/always/need'
should work
In the line above, the new folder path is set
This combines the answer by Serge with an unrelated answer by David. It changes the directory, and then instead of forcing a bash shell, it launches the user's default shell. It however requires both getent and /etc/passwd to detect the default shell.
#!/usr/bin/env bash
cd desired/directory
USER_SHELL=$(getent passwd "$USER" | cut -d : -f 7)
$USER_SHELL
Of course this still has the same deficiency of creating a nested shell.

Error passing multiple commands to Cisco CLI via plink

I've gotten some help with an earlier part of this batch file, but now I'm having trouble with the final component.
I've tried a few things with no success. I tried changing the CRLF to LF which did nothing. I also tried rephrasing the commands a few ways but I am still not getting anywhere. The following is my main batch file.
@echo on
REM delete deauth command file
SET OutFile="C:\temp\Out2.txt"
IF EXIST "%OutFile%" DEL "%OutFile%"
plink -v -ssh *@x.x.x.x -pw PW -m "c:\temp\WirelessDump.txt" > "C:\temp\output.txt"
setlocal
for /f %%a in (C:\temp\output.txt) do >> "Out2.txt" echo wir cli mac-address %%a deauth forced
REM Use commands in out2 to deauth
plink -v -ssh *@x.x.x.x -pw PW -m "c:\temp\Out2.txt"
pause
Below this sentence is the command found in Out2 which I think is giving the actual trouble. The number of lines varies but they are all this particular command just with differing MACs.
wir cli mac-address xxxx.xxxx.xxxx deauth forced
If Out2 has only a single line it runs fine, no issues. But when there are multiple lines, it fails with an error stating that the Line has an invalid autocommand. It's almost as if it was reading it as one contiguous command. As I mentioned above I changed from CRLF to LF hoping IOS would like it better, but that failed. I've tried adding extra lines between the commands, and I've tried calling the login every time from that file.
I am hoping that there is a way to tailor the commands to pass all lines one at a time to keep this down to a minimum of files.
I had another thought but it is kinda/very clunky. If there was a way to output each of those MAC deauth commands to its own file in a separate folder (out1, out2, out3), and have the BAT able to run all the randomly generated files in that folder, then each one would be a separate plink session.
Let me know if I need to change/add/elaborate on anything. Thanks in advance for anything you guys are willing to help with. I appreciate it.
EDIT: Martin has pointed out what the limitation actually is. It appears to be a limitation on Cisco to accept blocks of commands through SSH. So I still have the same question really, I just need some help figuring a workaround to this issue. I'm thinking the multiple file solution I mentioned above may have some possibility. But I'm too much of a noob to know how to make that work. I'll update if I have any breakthroughs though. Thanks for any contributions!
It's actually a known limitation of Cisco, that it does not support multiple commands in an SSH "exec" channel command.
Quoting section 3.8.3.6 ("-m: read a remote command or script from a file") of the PuTTY/Plink manual:
With some servers (particularly Unix systems), you can even put multiple lines in this file and execute more than one command in sequence, or a whole shell script; but this is arguably an abuse, and cannot be expected to work on all servers. In particular, it is known not to work with certain ‘embedded’ servers, such as Cisco routers.
Though you can probably still feed multiple commands to Plink input:
(
echo command 1
echo command 2
echo command 3
echo exit
) | plink -v -ssh user@host -pw password > output.txt
Or you can simply use an input file:
plink -v -ssh user@host -pw password < input.txt > output.txt
Similar question: A way of typing multiple commands in cmd.txt file using PuTTY batch against Cisco
This works without cmd.exe and without using intermediate files:
function Invoke-PlinkCommandsIOS {
param (
[Parameter(Mandatory=$true)][string] $Host,
[Parameter(Mandatory=$true)][System.Management.Automation.PSCredential] $Credential,
[Parameter(Mandatory=$true)][string] $Commands,
[Switch] $ConnectOnceToAcceptHostKey = $false
)
$PlinkPath = "$PSScriptRoot\plink.exe"
$Commands | & $PlinkPath -ssh -2 -l $Credential.GetNetworkCredential().username -pw "$($Credential.GetNetworkCredential().password)" $Host -batch
}
Usage: don't forget your exits and terminal length 0, or it will hang.
PS C:\> $Command = "terminal length 0
>> show running-config
>> exit
>> "
>>
PS C:\> Invoke-PlinkCommandsIOS -Host ace-dc1 -Credential $cred -Commands $Command
....
Sounds like your file 'Out2.txt' has only LF at the end of each line. A simple way to convert that to CRLF is to use the MORE command, redirect the output to a new file, and then use the new file.
more Out2.txt > Out2CRLF.txt
I ran into the same issue when trying to pull the full list of ACLs on an ASA via plink in powershell.
Essentially, due to the abuse issue referenced in the documentation: https://the.earth.li/~sgtatham/putty/0.72/htmldoc/Chapter3.html#using-cmdline-m, I was getting inconsistent results in pulling the ACLs. Sometimes I would get 0, sometimes only 1 or 2, and sometimes I would get all of them. (I personally, had about a 1 in 5 success rate).
As I would occasionally be successful, I used a while loop that would catch the unsuccessful attempts and retry. Just be sure to put some delay in the while loop to prevent it from spamming SSH connections too much.
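My loop was in PowerShell, but the idea in shell form looks roughly like this (host, credentials, command file, retry limit, and the success check are all placeholders):
# keep retrying the pull until the output looks complete; pause between attempts
tries=0
while :; do
  plink -ssh user@host -pw password -m show_acls.txt > acls.txt
  grep -q "access-list" acls.txt && break   # placeholder test for a complete pull
  tries=$((tries + 1))
  [ "$tries" -ge 10 ] && { echo "giving up after $tries attempts"; break; }
  sleep 5   # avoid spamming SSH connections
done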
It is not a good solution, but it worked as a last resort.

How to run command within a prompt set by shell script

This is the process we perform manually.
$ sudo su - gvr
[gvr/DB:DEV3FXCU]/home/gvr>
$ ai_dev.env
Gateway DEV3 $
$ gw_report integrations long
report is ******
Now I am attempting to automate this process using a shell script:
#!/bin/ksh
sudo su - gvr
. ai_dev3.env
gw_report integrations long
but this is not working; it gets stuck after entering the env.
Stuck at this place (Gateway DEV3 $)
You're not running the same commands in the two examples - gw_report long != gw_report integrations long. Maybe the latter takes much longer (or hangs).
Also, in the original code you run ai_dev.env and in the second you source it. Any variables set when running a script are gone when returning from that script, so I suspect this accounts for the different behavior.
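One common way around this (a sketch; it assumes sudo will let you run a command as gvr non-interactively, that the env file lives in gvr's home directory, and that it can simply be sourced) is to hand the whole sequence to the gvr login shell in one go:
#!/bin/ksh
# run the whole sequence inside the gvr login shell instead of returning to this script
sudo su - gvr -c '. ./ai_dev3.env && gw_report integrations long'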
