How do I connect to a database in $DLC in Progress OpenEdge? For details, see the image below.
Thanks,
Purushottam
Databases in $DLC (the directory that Progress was installed in) are templates -- you must make a copy of the template db in some other directory in order to use it. You cannot run databases directly from $DLC.
Usually you use a command such as:
proenv> prodb sports sports
to make a local copy of the default "sports" db.
Or you can just type "prodb" and you will be prompted for the new db name and the template name. The new name can be different from the template name.
You have to create a copy of the sports database in another directory (not in the OpenEdge installation directory) using the procopy or prodb command.
For example, in proenv:
procopy Sports2000 D:\spdb
or
prodb D:\spdb Sports2000
Now, you can easily connect to the database...
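Once the copy exists, you can start it and connect to it from proenv. A minimal sketch (the path and port number are just examples; adjust them to wherever you created your copy):
proenv> proserve D:\spdb -S 20001
starts a database broker so the database can be used in multi-user mode, and
proenv> mpro D:\spdb
starts a client session connected to it (or use pro D:\spdb for a quick single-user connection, which needs no broker).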
I have a Database Project with some views.
The views should behave differently depending on the environment they are published to.
When published to the development environment, the INNER JOINs should use a specific prefix for the target schema name, and another prefix in the test environment.
Is it possible to achieve this? In the code snippet below, I'd like to use Hub when developing locally and when publishing to the dev environment, and ISA when published to test.
Example:
CREATE VIEW [ISA].[v_CoveredRisk]
AS SELECT
CR.Bkey_CoveredRisk_Unique
,CO.Bkey_Coverage_Unique
,CO.Name
,PO.EKey_Policy
,CoObj.Bkey_CoveredObject
,CoObj.BKey_Building
,CoObj.Bkey_Home
,CoObj.BKey_Object
,CoObj.BKey_Person
,CoObj.BKey_Pet
,CoObj.BKey_Vehicle
,Risk_Excess
,Risk_Sum
,CAST(CurrentYearPremiumAmount AS float) AS CurrentYearPremiumAmount
,IsActive
,PO.BKey_Policy
,CR.Record_Timestamp
FROM Hub.[CoveredRisk] CR
INNER JOIN Hub.Coverage CO ON CR.EKey_Coverage = CO.EKey_Coverage
INNER JOIN Hub.CoveredObject CoObj ON CR.EKey_CoveredObject = CoObj.EKey_CoveredObject
INNER JOIN Hub.[Policy] PO ON CR.EKey_Policy = PO.Ekey_Policy
The first thing is that you should remove this requirement and have the code the same in all your databases. You are almost 100% guaranteed to make a mistake at some point regarding this and deploy something that doesn't work in a different environment.
If you do want to do this, you can do it with synonyms: in your view, reference a synonym and have it point to the respective schema. You can't point a synonym at a schema directly, but you can point one at objects within a schema, so if you have:
dev table devSchema.table
prod table prodSchema.table
in dev, have a synonym like:
create synonym Hub.table for devSchema.table
then your view references Hub.table and it will resolve to the dev table.
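A minimal sketch with the poster's placeholder names ([table] is bracketed only because it is a reserved word, and the Hub schema must already exist in each database):
In the dev database:
CREATE SYNONYM Hub.[table] FOR devSchema.[table];
In the test/prod database:
CREATE SYNONYM Hub.[table] FOR prodSchema.[table];
The view text (FROM Hub.[table] ...) then stays identical everywhere; only the synonym definition differs per environment.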
Have a T4 template generate your SQL script.
In your template, find out which environment you are generating for and create the output accordingly.
To find out which environment you need to generate the output for, have a look at publishing profiles and configuration-specific variables (a.k.a. "conditional compilation symbols" in your project's build properties).
You can probe those from within the T4 template.
You can't use variables in schema or object names. If you really want to achieve what you describe, I can suggest two ways:
The first: you will not control it with variables, but with release configurations. You can use conditional statements in the sqlproj file. So, I've created two views:
CREATE VIEW [HUB].[View1]
AS SELECT 1 as one;
CREATE VIEW [ISA].[View1]
AS SELECT 1 as one;
Then in the sqlproj file I do the following:
<Build Include="View1.sql" />
<None Include="View1.sql" Condition=" '$(Configuration)' == 'Debug'" />
<None Include="View1_1.sql" />
<Build Include="View1_1.sql" Condition=" '$(Configuration)' == 'Release' " />
And then just pick the right release configuration when you deploy
NOTE: You need to include and exclude the same file from the build to achieve that.
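For example (a rough sketch; the project name, output path, and target server/database are placeholders), building with a given configuration and publishing the resulting dacpac could look like:
msbuild MyDatabase.sqlproj /p:Configuration=Release
SqlPackage /Action:Publish /SourceFile:bin\Release\MyDatabase.dacpac /TargetServerName:MyServer /TargetDatabaseName:MyDatabase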
The second way is a much simpler approach: always create the view with the same name, and move it to the proper schema in the publish script. For example:
IF DB_NAME() = 'Dev' EXEC sp_rename ....
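A minimal sketch of that post-deployment step (database, schema, and view names are just examples; note that moving an object to a different schema is done with ALTER SCHEMA ... TRANSFER, since sp_rename cannot change an object's schema):
IF DB_NAME() = 'Dev' AND OBJECT_ID('ISA.v_CoveredRisk', 'V') IS NOT NULL
BEGIN
    -- assumes the Hub schema already exists in the Dev database
    ALTER SCHEMA Hub TRANSFER ISA.v_CoveredRisk;
END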
I have an AD with 71 computers.
However, there are actually fewer than 50 physical computers, so I'm doing some cleanup. But up until now, when renaming PCs, I've only renamed them at the workstation through Control Panel->System.
So, the displayed name of computers in AD does not match the workstation's computer name. The displayed name, the "Canonical name of object" under Properties->Object, and the cn attribute in Attribute Editor are all the old name, while the "Computer name" and "DNS name" under Properties->General are the updated name also found at the workstation.
How do I reconcile the two different sets of names for each computer? I cannot edit the "Canonical name of object" in Properties, nor can I edit the cn attribute:
Operation failed. Error code: 0x2016; The directory service cannot
perform the requested operation on the RDN attribute of an object.
00002016: Modify of RDN 'CN' on CN=COMP,OU=TEST,DC=DOMAIN,DC=local not
permitted, must use 'rename' operation instead.
Going forward, what is the proper way to rename a PC so that it updates both at the workstation and throughout AD?
There is a command line available (reference here: https://technet.microsoft.com/en-us/library/cc788029.aspx) that people use to automate the renaming of domain-joined workstations.
It's a two-step process: first you rename the computer, and then you rename its CN (the AD object itself). netdom renamecomputer doesn't rename the AD object, and I assume that Rename-Computer doesn't either (please edit this answer if that's incorrect). A PowerShell sketch of the two steps follows the lists below.
PowerShell
Rename-Computer
[ Get-ADComputer | ] Rename-ADObject
CMD
netdom renamecomputer
dsmove
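For example, a rough PowerShell sketch of the two steps (placeholder names and credentials; requires the ActiveDirectory module for the second step, and assumes, as above, that Rename-Computer leaves the CN untouched):
# Step 1: rename the workstation itself (this reboots it)
Rename-Computer -ComputerName OLDPC -NewName NEWPC -DomainCredential DOMAIN\AdminUser -Restart
# Step 2: rename the computer object in AD so the CN / displayed name matches
Get-ADComputer NEWPC | Rename-ADObject -NewName NEWPC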
We have created a site for a client using Django CMS and are approaching the launch date. There are a number of links to files on their old site. Doing a search of the cmsplugin_text table, I find 12 entries that contain the URL. There is no simple mapping to the new file download URL from the old download URL, so I need to find the pages these 12 entries appear on and tell our client so they can edit the page.
But the database is not easy to follow. So how do I go from the value of the cmsplugin_ptr_id column of the cmsplugin_text table to the URL of the page? I'm fairly sure that cmsplugin_ptr_id is meant to line up with the id of the cms_cmsplugin table. That table also has parent_id, tree_id and placeholder_id columns, but I've kind of got lost at this point.
I'm happy to use either the database commands directly, or to use manage.py shell to do this.
Should have tried a bit harder before answering.
The steps that worked were to look in cms_page_placeholders for rows with the placeholder_id and look up the corresponding page_id. I could then look up the page in the admin at http://mysite.com/en/admin/cms/page/page_id, and that page has a "View on site" link.
The SQL statement I used was:
SELECT cpp.page_id
FROM cmsplugin_text AS cpt
LEFT JOIN cms_cmsplugin AS ccp ON cpt.cmsplugin_ptr_id = ccp.id
LEFT JOIN cms_page_placeholders AS cpp ON ccp.placeholder_id = cpp.placeholder_id
WHERE cpt.body like '%userfiles%';
Where userfiles was part of the path to the files on the old site.
I am using the bulk upload code described at http://www.salesforce.com/us/developer/docs/api_asynch/.
The only difference is that I am uploading a custom object type. I can access Employee__c, but now I get a different error:
stateMessage='InvalidBatch : Field name not found : First Name'
First Name is the first column in the CSV.
While debugging I can see that the temp CSV is being created correctly. However, I get this error when checkResults executes. The code is exactly the same as in the sample Java code for the Bulk API using REST.
I am using the free Developer Edition of Salesforce.
I created a new permission set where I have given the following permissions on the custom object Employee:
Read/create/edit/delete/view all/modify all.
All fields are given edit permissions.
The permission set is associated with the Salesforce user license.
The programmatic login is with a user associated with the System Administrator profile, which has the Salesforce user license.
But still the error persists!
Any pointers would be appreciated
Thanks
Sameer
Try "FirstName" without the space.
You can view the API name of any field in Setup > App Setup > Objects > (Select Your Object) > (Select Your Field). Make sure all the fields referenced in your CSV header use the correct API names (note that custom fields have API names ending in __c).
I was wondering if someone knows where I can see the data of a suspended message in the BizTalk database.
I need this because about 900 messages have been suspended because of a validation failure, and I need to edit all of them; resuming isn't possible.
I know that info about suspended messages is shown in BizTalkMsgBoxDb in the table InstancesSuspended, and that the different parts of each message are shown in the table MessageParts. However, I can't find the table where the actual data is stored.
Does anyone have any idea where this can be done?
I found a way to do this; there's no risk of screwing up my system when I just want to read the messages.
I did it using the Decompress method of the CompressionStreams class in Microsoft.BizTalk.Pipeline.dll.
The method to do this:
public static Stream getMsgStrm(Stream stream)
{
    // Load Microsoft.BizTalk.Pipeline.dll and call the static Decompress method
    // of the CompressionStreams interop type via reflection.
    Assembly pipelineAssembly = Assembly.LoadFrom(string.Concat(@"<path to dll>", @"\Microsoft.BizTalk.Pipeline.dll"));
    Type compressionStreamsType = pipelineAssembly.GetType("Microsoft.BizTalk.Message.Interop.CompressionStreams", true);
    return (Stream)compressionStreamsType.InvokeMember("Decompress", BindingFlags.Public | BindingFlags.InvokeMethod | BindingFlags.Static, null, null, new object[] { (object)stream });
}
Then I connect to the database (BizTalkMsgBoxDb), fill a DataSet, and stream the data out to a string:
String SelectCmdString = "select * from dbo.Parts";
SqlDataAdapter mySqlDataAdapter = new SqlDataAdapter(SelectCmdString, "<your connection string>");
DataSet myDataSet = new DataSet();
mySqlDataAdapter.Fill(myDataSet, "BodyParts");
foreach (DataRow row in myDataSet.Tables["BodyParts"].Rows)
{
    if (row["imgPart"].GetType() != typeof(DBNull))
    {
        // imgPart holds the compressed message body; decompress it back to readable text.
        SqlBinary binData = new SqlBinary((byte[])row["imgPart"]);
        MemoryStream stm = new MemoryStream(binData.Value);
        Stream aStream = getMsgStrm(stm);
        StreamReader aReader = new StreamReader(aStream);
        string aMessage = aReader.ReadToEnd();
        //filter msg
        //write msg
    }
}
I then write each string to an appropriate .txt or .xml file, depending on what you want; you can also filter out certain messages with regular expressions, etc.
Hope this helps anyone, it sure as hell helped me.
Greetings
Extract Messages from suspended instances
Scenario:
BizTalk 2010 and SQL Server 2008 R2 make up the environment we have used for this scenario.
You have a problem with some integrations, say 1500 suspended instances inside BizTalk, and you need to send the actual messages to a customer; you probably do not want to save these out manually from the BizTalk Administration console.
There are a lot of blogs and Internet resources pointing to VBS and PowerShell scripts for how to do this, but I have used BizTalk Terminator to solve this kind of scenario.
As you know, BizTalk Terminator asks you three questions when the tool starts:
1. Are all BizTalk databases backed up?
2. Are all Host Instances stopped?
3. Are all BizTalk SQL Agents stopped?
This is fine when you are actually going to change something inside the BizTalk databases, but that is not what you are doing in this scenario; you are only using the tool to read from the BizTalk databases. You should, however, always have backups of the BizTalk databases.
You are always responsible for what you are doing, but when we have used this tool in the way I describe, we have not had any problems with this scenario.
So after you have started the Terminator tool, click Yes to the three questions (you don't need to stop anything in this scenario), then connect to the correct environment. Please do this in your test environment first so you feel comfortable with the procedure. The next step is to choose a Terminator task: choose Count Instances (and save messages). After this you have to fill in the parameter tab with the correct serviceClass and Hostname, set SaveMessages to True, and finally set FilesaveFullPath to the folder you want to save the messages to.
Then you can click the Execute button; depending on the size and number of instances, this can take some time. After this, disconnect Terminator and do NOT do anything else.
If you have filled in the correct values in the parameter tab, you should now have the saved messages inside the FilesaveFullPath folder.
Download BizTalk terminator from this address:
http://www.microsoft.com/en-us/download/details.aspx?id=2846
This is more than likely not supported by Microsoft. Don't risk screwing up your system. If you need to be able to edit and resubmit, it needs to be built into the orchestration. Otherwise, your best bet is to use WMI to write a script to:
pull out all of the suspended messages
terminate them
edit them
resubmit them
You can find it through the HAT tool; you just need to specify the schema, port, and the exact date and time, and it will show you the messages. Right-click on the desired one and save.