When I install my agents in the VOLTTRON platform, they are all assigned the same name, "Agentagent-3.0". I can change part of the name in setup.py, but I don't know the right way to name agents. Where should I set an agent's name?
One way to refer to each agent would be to use the tag command to distinguish between them: volttron-ctl tag myTag agentUUID. Then I can refer to agents by their tag, for example: volttron-ctl stop --tag myTag.
I'm sorry that I missed this question when it first came up Amin.
You need to change the Agent's VIP IDENTITY. There are several ways to do this based on your circumstances.
If you are the agent developer you can create a file called IDENTITY that contains only the desired identity in plain text. You can see an example of this in services/core/MasterDriverAgent in the VOLTTRON repository.
If you are deploying an agent and want to specify a different VIP IDENTITY you can specify the environment variable AGENT_VIP_IDENTITY in your make script. You can see a commented out example of this in scripts/core/make-listener. This method overrides the preferred identity of the agent, if any.
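For the first approach, the IDENTITY file is just a one-line plain-text file sitting in the agent's source directory. A sketch (the identity string below is a placeholder, not one from the question):

```
my.agent.identity
```

For the second approach, the make script would set the variable before installing, e.g. a line like `export AGENT_VIP_IDENTITY=my.agent.identity` (variable value again a placeholder).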
I'm doing a PoC for Flink SQL, and I'm wondering what the proper way is to deal with credentials, for example when accessing databases and Kafka. It obviously works when I include them in the query, but sprinkling credentials all through the query isn't great.
Can I refer to a secret contained in a mounted file? Or at least an environment variable?
Is there a ${ENV} or something?
For example, I would like to supply the credentials from elsewhere:
CREATE CATALOG analytics WITH (
    'type'='jdbc',
    'base-url'='jdbc:postgresql://some-postgres:5432/',
    'default-database'='analytics',
    'username'='someuser',
    'password'='somepassword'
);
Similar when creating Kafka tables.
I suppose that I could include a UDF that can do that, but before I go down that road I'd like to know if there is something obvious I am missing.
You can always pass your credentials as arguments to your program. You can make a properties file to define whatever you want and pass it to your Flink program.
val username = ParameterTool.fromArgs(args).getRequired("username")
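ParameterTool (there is also a ParameterTool.fromPropertiesFile variant) is essentially a wrapper around loading a plain properties file. A minimal self-contained sketch of the same idea using only java.util.Properties, so the secrets can be interpolated into the DDL instead of hard-coded; the class name, key names, and values here are hypothetical, not part of any Flink API:

```java
import java.io.StringReader;
import java.util.Properties;

public class CredsDemo {
    public static void main(String[] args) throws Exception {
        // In a real job you would load this from a creds.properties file on
        // disk, or via ParameterTool.fromArgs(args) / fromPropertiesFile(...).
        String fileContents = "username=someuser\npassword=somepassword\n";

        Properties props = new Properties();
        props.load(new StringReader(fileContents));

        String username = props.getProperty("username");
        String password = props.getProperty("password");

        // Build the CREATE CATALOG statement with the loaded secrets instead
        // of sprinkling literals through the query text.
        String ddl = String.format(
            "CREATE CATALOG analytics WITH ("
            + "'type'='jdbc',"
            + "'base-url'='jdbc:postgresql://some-postgres:5432/',"
            + "'default-database'='analytics',"
            + "'username'='%s',"
            + "'password'='%s')", username, password);
        System.out.println(ddl);
    }
}
```

The same string would then be handed to tableEnv.executeSql(ddl) in the actual Flink program.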
Credentials can be stored in flink-conf.yaml and accessed as configuration parameters. Note that any configuration setting whose key contains one of the strings "password", "secret", "fs.azure.account.key", or "apikey" will have its value obscured in the logs.
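For example, an entry like the one below (the key name is hypothetical) would be masked in logs because the key contains "password"; the value can then be read in code via Flink's Configuration API (e.g. GlobalConfiguration.loadConfiguration()):

```yaml
# flink-conf.yaml
my.connector.jdbc.password: somepassword
```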
You also have the option of defining tables in one of the catalogs supported by Flink SQL.
I've worked a lot with Pentaho PDI so some obvious things jump out at me.
I'll call Connection Managers "CMs" from here on out.
Obviously, Project CMs > Package CMs, for extensibility/reusability. It seems a rare case indeed where you would need a Package-level CM.
But I'm wondering about another best practice: should each Project CM itself be composed of variables (or parameters, I guess)?
Let's talk in concrete terms. There are specific database sources. Let's call two of them in use Finance2000 and ETL_Log_db. These have specific connection strings (password, source, etc).
Now if you have 50 packages pulling from Finance2000 and also using ETL_Log_db ... well ... what happens if the databases change? (host, name, user, password?)
Say it's now Finance3000.
Well I guess you can go into Finance2000 and change the source, specs, and even the name itself --- everything should work then, right?
Or should you simply build a project-level database connection called "FinanceX" or whatever and make it comprised of parameters, so the connection string is something like #Source + #credentials + #whatever?
Or is that simply redundant?
I can see one benefit of the parameter method: you can change the "logging database" on the fly, even within the package itself during execution, instead of only passing parameters at runtime. I think. I don't know. I don't have a mountain of experience with SSIS yet.
SSIS, starting from version 2012, has the SSIS Catalog DB. You can create all 50 of your packages in one Project, and all these packages share the same Project Connection Managers.
Then you deploy this Project into the SSIS Catalog; the Project automatically exposes Connection Manager parameters with a CM prefix. The CM parameters are part of the Connection Manager definition.
In the SSIS Catalog you can create so-called Environments. In an Environment you define variables with a name and a data type, and store their values.
Then - the most interesting part - you can associate the Environment with the uploaded Project. This allows you to bind a project parameter to an environment variable.
At package execution you specify which Environment to use for the connection strings. Yes, you can have several Environments in the Catalog, and choose one when starting the package.
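The Environment setup can also be scripted against the SSISDB catalog stored procedures. A sketch; the folder, environment, and variable names are hypothetical, and the parameter list is from memory, so verify against the MS docs before using:

```sql
EXEC SSISDB.catalog.create_environment
     @folder_name = N'MyFolder',
     @environment_name = N'ProdEnv';

EXEC SSISDB.catalog.create_environment_variable
     @folder_name = N'MyFolder',
     @environment_name = N'ProdEnv',
     @variable_name = N'FinanceConnStr',
     @data_type = N'String',
     @sensitive = 1,  -- stored encrypted in the catalog
     @value = N'Data Source=prodserver;Initial Catalog=Finance2000;';
```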
Cool, isn't it?
Moreover, passwords are stored encrypted, so no one can copy them. The values of these environment variables can be configured by support engineers who have no knowledge of the SSIS packages.
More Info on SSIS Catalog and Environments from MS Docs.
I'll give my fair share of experience.
I recently had a similar experience at work: our two main databases' names changed, and I had no issues or downtime on the schedules.
The model we use is not the best, but for this and other reasons it is quite comfortable to work with. We use BAT files to pass named parameters into a "Master" job, and depending on 2 parameters, the job runs against an alternate database/host.
The model is: in every KTR/KJB we use the variables ${host} and ${dbname}, and these parameters are passed in by each BAT file. So when we had to change the names of the hosts and databases, it was a simple Replace All text match in Notepad++, and done: 2,000+ BAT files fixed, and no downtime.
Having a variable for the Host/DB Name for both Client Connection and Logging Connection lets you have that flexibility when things change radically.
You can also use the kettle.properties file for the logging connection.
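A kettle.properties sketch for that, using the same ${host} and ${dbname} variable names as above (the values are hypothetical):

```properties
# ~/.kettle/kettle.properties
host=finance-db.example.com
dbname=Finance2000
```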
I followed this tutorial from Scott, which worked until the point when I changed some user details in my Config file, but those changes are never updated in my database.
What should I do so that all the changes from my config file are updated in my database as well?
Here is the InitializeDbTestData(app) method
For others who are interested in this topic, you should take a look here.
Apparently there is no way to do this; you just have to use other tools such as AdminUI (which you have to pay for).
OK, so I did the dumb thing and released production code (C#, VS2010) that targeted our development database (SQL Server 2008 R2). Luckily we are not using the production database yet so I didn't have the pain of trying to recover and synchronize everything...
But, I want to prevent this from happening again when it could be much more painful. My idea is to add a table I can query at startup and determine what database I am connected to by the value returned. Production would return "PROD" and dev and test would return other values, for example.
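A sketch of that marker-table idea (the table and column names here are hypothetical):

```sql
-- Created once per environment, seeded with that environment's marker value.
CREATE TABLE EnvironmentMarker (EnvironmentName VARCHAR(10) NOT NULL);
INSERT INTO EnvironmentMarker (EnvironmentName) VALUES ('PROD');  -- 'DEV'/'TEST' elsewhere

-- At application startup:
SELECT EnvironmentName FROM EnvironmentMarker;
```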
If it makes any difference, the application talks to a WCF service to access the database so I have endpoints in the config file, not actual connection strings.
Does this make sense? How have others addressed this problem?
Thanks,
Dave
The easiest way to solve this is to not have access to production accounts. Those are stored in the Machine.config file for our .NET applications. In non-.NET applications this is easily duplicated by having a config file in a common location, or (dare I say) a registry entry which holds the account information.
Most of our servers are accessed through aliases too, so no one really needs to change the connection string from environment to environment. Just grab the user from the config and the server alias in the hosts file points you to the correct server. This also removes the headache from us having to update all our config files when we switch db instances (change hardware etc.)
So even with the ClickOnce deployment and the endpoints, you can publish a new endpoint URI in a machine config on the end user's desktop (I'm assuming this is an internal application), and then reference that in the code.
If you absolutely can't do this, as it might be a lot of work (the last place I worked had 2,000 call center people, so this push was a lot more difficult, but still possible), you can always set up an automated build server which modifies the app.config file as a last step of building the application. You then ALWAYS publish the compiled code from the automated build server. Never make changing the app.config for something like this a manual step in the developer's process; that will always lead to problems at some point.
Now if none of this works, your final option (done this one too), which I hated, but it worked, is to look up the value off of a mapped drive. Essentially, everyone in the company has a mapped drive, say R:. This is where you have your production configuration files, etc. The prod account people map that drive to one location with the production values, and the devs map it to another with the development values. I hate this option compared to the others, but it works, and it can save you in a pinch when the others become tedious and difficult (due to, say, office politics, setting up a build server, etc.).
I'm assuming your production server has a different name than your development server, so you could simply SELECT @@SERVERNAME AS ServerName.
Not sure if this answer helps you in an assumed .NET environment, but within a *nix/PHP environment, this is how I handle the same situation.
OK, so I did the dumb thing and released production code
There are times when some app behavior is environment dependent, as you alluded to. In order to provide the ability to check between development and production environments, I added the following line to a global profile script, /etc/profile.d/custom.sh (CentOS):
export SERVICE_ENV=dev
And in code I have a wrapper method which grabs an environment variable by name and localizes its value, making it accessible to my application code. Below is a snippet demonstrating how to check the current environment and react accordingly (in PHP):
public function __call($method, $params)
{
    // Reduce chatter on production envs:
    // only emit debug messages if an override told us to.
    if (($method === 'debug') &&
        (CoreLib_Api_Environment_Package::getValue(CoreLib_Api_Environment::VAR_LABEL_SERVICE) === CoreLib_Api_Environment::PROD) &&
        (!in_array(CoreLib_Api_Log::DEBUG_ON_PROD_OVERRIDE, $params))) {
        return;
    }
    // ...otherwise fall through to the normal logging implementation.
}
Remember, you don't want to pepper your application logic with environment checks, save for a few extreme use cases like the snippet above. Rather, you should control access to your production databases using DNS. For example, within your development environment the db hostname mydatabase-db would resolve to a local server instead of your actual production server. When you push your code to the production environment, DNS will correctly resolve the hostname, so your code should "just work" without any environment checks.
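On a development box, that resolution can be as simple as a hosts-file entry (the hostname is the hypothetical one from the example above); in production the same name would come from real DNS instead:

```
# /etc/hosts on a development machine
127.0.0.1   mydatabase-db
```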
After hours of wading through textbooks and tutorials on MSBuild and app.config manipulation, I stumbled across something called SlowCheetah - XML Transforms http://visualstudiogallery.msdn.microsoft.com/69023d00-a4f9-4a34-a6cd-7e854ba318b5 that did what I needed it to do in less than an hour. Definitely recommended! From the article:
This package enables you to transform your app.config or any other XML file based on the build configuration. It also adds additional tooling to help you create XML transforms.
This package is created by Sayed Ibrahim Hashimi, Chuck England and Bill Heibert, the same Hashimi who authored THE book on MSBuild. If you're looking for a simple, ubiquitous way to transform your app.config, web.config or any other XML file based on the build configuration, look no further -- this VS package will do the job.
Yeah I know I answered my own question but I already gave points to the answer that eventually pointed me to the real answer. Now I need to go back and edit the question based on my new understanding of the problem...
Dave
I'm assuming your production server has a different IP address. You can simply use
SELECT CONNECTIONPROPERTY('local_net_address') AS local_net_address
In ASP.NET, each session can be identified by its SessionID variable. Currently, I'm working on a project for which I want to be able to identify each separate user session. In other words, I'm looking for a session identifier or an equivalent variable.
I've looked in the Application, Environment and AppDomain classes, but I couldn't find such a variable. So my question is: how should one identify the session(s) an application is currently handling?
Maybe System.Diagnostics.Process.GetCurrentProcess().Id would cover your needs? That will give you a number that uniquely identifies the currently running process on the system. The number is valid only while the process runs; once it has quit, any other process may be assigned the same number when it starts.
I'm not quite sure I follow you, but if you're trying to track each instance of the application's lifecycle, you could create a GUID as an instance member somewhere appropriate. Whenever you feel a new "session" has been created, you can create and store this GUID - probably when the user logs in (or the main form loads if you don't have a login mechanism).
I'm assuming, of course, that you have a multi-user environment with some kind of server attached; otherwise I can't really see a need for sessions.
You could check some of the options in the Environment class such as Environment.UserName, Environment.MachineName or Environment.UserDomainName