I want to implement Oracle Streams between different schema names, for example schema1.jobs to schema2.jobs, because most people give examples within the same schema, like scott.emp to scott.emp :(
Does anybody have any advice or a relevant thread?
Thank you so much :)
You must configure the apply process. To do this, add rules to the rule set; with this configuration, the apply process dequeues the LCR (Logical Change Record) events and applies the changes to the destination schema. Execute the following in the destination DB as the strmadmin user:
SQL> begin
  dbms_streams_adm.add_schema_rules (
    schema_name     => 'XXX',
    streams_type    => 'apply',
    streams_name    => 'apply_strm',
    queue_name      => 'capture_Downstream',
    include_dml     => true,
    include_ddl     => true,
    source_database => 'SOURCE_GLOBAL_NAME');
end;
/
You should adjust the parameters to your case. See the DBMS_STREAMS_ADM documentation: https://docs.oracle.com/cd/B10501_01/appdev.920/a96612/d_strm_2.htm (that link is for 9.2).
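One caveat for your schema1-to-schema2 case: add_schema_rules alone keeps the original schema name inside the LCRs, so the changes would still be applied to schema1. On 10.2 and later (the 9.2 docs above predate this), a declarative rule-based transformation can rename the schema on the fly. A minimal sketch; the rule name below is hypothetical, so look up your real DML rule name in DBA_STREAMS_RULES first:
begin
  dbms_streams_adm.rename_schema (
    rule_name        => 'STRMADMIN.SCHEMA1_DML_RULE', -- hypothetical; query DBA_STREAMS_RULES for yours
    from_schema_name => 'SCHEMA1',
    to_schema_name   => 'SCHEMA2',
    step_number      => 0,
    operation        => 'ADD');
end;
/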
I have a Logstash pipeline that fetches data from an MS SQL view that joins tables A and B and puts the denormalised data into ES.
Initially, INSERTs or UPDATEs could happen only in table A. Therefore, to configure Logstash to pick up only records inserted or updated since the last iteration of the polling loop, I defined the tracking_column field, which refers to the updatedDate timestamp column in table A:
jdbc {
  # Program Search
  jdbc_connection_string => "jdbc:sqlserver://__DB_LISTNER__"
  jdbc_user => "admin"
  jdbc_password => "admin"
  jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
  jdbc_driver_library => "/usr/share/logstash/drivers/mssql-jdbc-6.2.2.jre8.jar"
  sql_log_level => "info"
  tracking_column => "updated_date_timestamp"
  use_column_value => true
  tracking_column_type => "timestamp"
  schedule => "*/1 * * * *"
  statement => "select *, updateDate as updated_date_timestamp from dbo.MyView where (updateDate > :sql_last_value and updateDate < getdate()) order by updateDate ASC"
  last_run_metadata_path => "/usr/share/logstash/myfolder/.logstash_jdbc_last_run"
}
Now, UPDATEs can also happen in table B. With this new requirement I am confused about how to configure Logstash to track changes on table B as well in the same pipeline. Can I define multiple tracking_columns for the same pipeline?
Two other options I have in mind, though I am not sure about them, are:
Generate a composite value from the updateDate fields of tables A and B, to be referenced by the tracking_column. But I am not sure what the SQL query should look like then.
Create another pipeline that tracks changes for table B only. The drawback I see with this approach is that the existing and new pipelines will do duplicate work on their initial iterations in order to process all the records from the DB view.
Please advise me on how I should go from here.
I found this ES discussion that suggests using a function to select the greatest of the provided dates in the SQL query. SQL Server does have a GREATEST function, but it is not recognised by the SQL Server version I am currently using. Long story short, as a workaround I found the iif() function, which I use for comparing the dates. So my SQL query looks like this:
select *,
       iif(A.updatedDate > B.updatedDate, A.updatedDate, B.updatedDate) as updated_date_timestamp
from dbo.MyView
where (iif(A.updatedDate > B.updatedDate, A.updatedDate, B.updatedDate) > :sql_last_value
  and  iif(A.updatedDate > B.updatedDate, A.updatedDate, B.updatedDate) < getdate())
order by iif(A.updatedDate > B.updatedDate, A.updatedDate, B.updatedDate) ASC, id ASC
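If repeating the iif() expression four times bothers you, CROSS APPLY (VALUES ...) can compute it once and give it a name. A sketch only: the column names updatedDateA and updatedDateB are hypothetical stand-ins for whatever your view actually exposes for the two tables' timestamps:
select v.*,
       g.updated_date_timestamp
from dbo.MyView v
cross apply (values (iif(v.updatedDateA > v.updatedDateB, v.updatedDateA, v.updatedDateB))
            ) as g(updated_date_timestamp)
where g.updated_date_timestamp > :sql_last_value
  and g.updated_date_timestamp < getdate()
order by g.updated_date_timestamp asc, v.id asc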
I am new to Oracle Database. I work on a 12c Oracle database hosted on a Linux platform. I have to whitelist a list of IP addresses that are allowed to access the database.
Example: below are the server details, and I need to add my IP address to connect to the database:
(PROTOCOL = TCP)(HOST = 192.168.56.122) (PORT = 1521)
kishan 192.108.10.132 xyz@gmail.com
I have gone through these documents, but they were not quite helpful. Any help would be much appreciated!
https://docs.oracle.com/en/cloud/paas/casb-cloud/palug/putting-ip-addresses-blacklists-or-whitelists.html#GUID-17060E3D-D8B6-41F1-AAEB-9CC3F4D7B670
https://docs.oracle.com/en/cloud/paas/exadata-express-cloud/csdbp/configure-ip-whitelist-policy.html
Looks like you're looking for an ACL (Access Control List). Here's an example:
Create ACL:
BEGIN
  DBMS_NETWORK_ACL_ADMIN.create_acl (
    acl         => 'kishan.xml',
    description => 'HTTP Access',
    principal   => 'KISHAN',   -- user in your database
    is_grant    => TRUE,
    privilege   => 'connect',
    start_date  => NULL,
    end_date    => NULL);
END;
/
Assign ACL:
BEGIN
  DBMS_NETWORK_ACL_ADMIN.assign_acl (
    acl        => 'kishan.xml',
    host       => '192.108.10.132',
    lower_port => NULL,
    upper_port => NULL);
END;
/
Add privileges:
BEGIN
  DBMS_NETWORK_ACL_ADMIN.add_privilege (
    acl        => 'kishan.xml',
    principal  => 'KISHAN',
    is_grant   => TRUE,
    privilege  => 'connect',
    start_date => NULL,
    end_date   => NULL);
  DBMS_NETWORK_ACL_ADMIN.add_privilege (
    acl        => 'kishan.xml',
    principal  => 'KISHAN',
    is_grant   => TRUE,
    privilege  => 'resolve',
    start_date => NULL,
    end_date   => NULL);
END;
/
COMMIT;
After you've done all that, user KISHAN should have access to 192.108.10.132. If other users need the same access, just add them to the "add privileges" script as well.
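To verify what you've configured, you can query the ACL data dictionary views (a sketch; 12c additionally exposes the same information through DBA_HOST_ACLS and DBA_HOST_ACES):
-- which ACLs are assigned to which hosts
SELECT acl, host, lower_port, upper_port FROM dba_network_acls;
-- which principals hold which privileges in each ACL
SELECT acl, principal, privilege, is_grant FROM dba_network_acl_privileges;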
ACLs as described by @Littlefoot control access from within the database to external resources (e.g. a PL/SQL stored procedure accessing a web service or e-mail server). If you're talking about whitelisting database clients connecting to the DB from other hosts, there are a couple of options, but be careful not to work yourself into a corner in terms of administrative overhead. It is very important to consider what actual problem you are trying to solve.
You can use:
the host server's local firewall (e.g. iptables, firewalld, etc.) to restrict access to port 1521 (or whatever port you're using);
the TCP.INVITED_NODES parameter in sqlnet.ora (see here: https://docs.oracle.com/en/database/oracle/oracle-database/19/netrf/parameters-for-the-sqlnet.ora.html#GUID-897ABB80-64FE-4F13-9F8C-99361BB4465C, and the sketch after this list);
or Oracle Connection Manager if you have an Enterprise Edition database.
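For the sqlnet.ora route, a minimal sketch of the server-side configuration follows; the addresses are placeholders, subnet/wildcard notation support varies by version (check the reference above), and the listener must be restarted to pick up the change:
# sqlnet.ora on the database server
TCP.VALIDNODE_CHECKING = YES
# only these clients may connect; everyone else is refused
TCP.INVITED_NODES = (192.168.56.122, 192.108.10.132)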
In general I wouldn't restrict to anything narrower than a subnet, though. The reason is that there isn't any good way to do it more precisely: IP addresses tend to change frequently with DHCP, which could result in a user being unintentionally locked out, and they can easily be spoofed by bad actors. Tracking each individual IP is an administrative nightmare, too.
See these articles I wrote up last year for more detail and some of the important questions to consider:
https://pmdba.wordpress.com/2020/02/18/how-to-limit-a-user-connection-to-a-specific-ip-address/
https://pmdba.files.wordpress.com/2013/03/deploying-an-oracle-11gr2-connection-manager.pdf
I use a union to join two datasets and then the following query to set up pagination correctly:
$paginationQuery = $this->find('all')
->contain(['EmailAddresses' => [
'foreignKey' => false,
'queryBuilder' => function($q) {
return $q->where(['Members__id' => 'EmailAddresses.member_id']);
}
]])
->select( $selectMainUnion )
->from([$this->getAlias() => $query])
->order(['Members__last_name' => 'ASC', 'Members__first_name' => 'ASC']);
I have also tried
$paginationQuery = $this->find('all')
->contain(['EmailAddresses'])
->select( $selectMainUnion )
->from([$this->getAlias() => $query])
->order(['Members__last_name' => 'ASC', 'Members__first_name' => 'ASC']);
and tried
$query->loadInto($query, ['EmailAddresses']); where $query is the result of the union.
Neither of these results in email addresses being added to $paginationQuery.
Is there a way to do this?
Adding this to clarify the code:
$selectMain =['Members.id',
'Members.member_type',
'Members.first_name',
'Members.middle_name',
'Members.last_name',
'Members.suffix',
'Members.date_joined'];
foreach($selectMain as $select) {
$selectMainUnion[] = str_replace('.', '__', $select);
}
$this->hasMany('EmailAddresses', [
'foreignKey' => 'member_id',
'dependent' => true,
]);
Looking at the SQL in DebugKit SQL Log, there is no reference to the EmailAddresses table.
Generally, containments do work fine irrespective of the query's FROM clause; whether that's a table or a subquery should be irrelevant. The requirement for this to work, however, is that the required primary and/or foreign key fields are selected, and that they are in the correct format.
By default, CakePHP's ORM queries automatically alias selected fields, ie they are selected like Alias.field AS Alias__field. So when Alias is a subquery, Alias.field doesn't exist; you'd have to select Alias.Alias__field instead. With the automatic aliasing, your select of Members__id is therefore transformed into Members.Members__id AS Members__Members__id, and Members__Members__id is not something the ORM understands. It would end up as Members__id in your entities, whereas the eager loader expects id, ie the name of the primary key, which it uses to inject the results of the queried hasMany associated records (this happens in a separate query). Your custom queryBuilder won't help with that, as the injecting happens afterwards, on the PHP level.
Long story short, to fix the problem, you can either change how the fields of the union queries are selected, ie ensure that they are not selected with aliases; that way the pagination query fields do not need to be changed at all:
$fields = $table->getSchema()->columns();
$fields = array_combine($fields, $fields);
$query->select($fields);
This will create a list of fields in the format of ['id' => 'id', ...]. It looks a bit whacky, but it works (as long as there's no ambiguity, because of joined tables for example); the SQL would be like id AS id, so your pagination query can then simply reference the fields like Members.id.
Another way would be to select the aliases of the subquery, ie not just select Members__id, which the ORM turns into Members__Members__id when it applies automatic aliasing, but use Members.Members__id, like:
[
    'Members__id' => 'Members.Members__id',
    // ...
]
That way no automatic aliasing takes place; on the SQL level the field is selected like Members.Members__id AS Members__id, and it would end up as id in your entities, which the eager loader can find and use for injecting the associated records.
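If you'd rather not hand-write that map, it can be derived from the alias list already built in the question; a hypothetical bit of glue code, assuming $selectMainUnion and $paginationQuery as defined above:
// build ['Members__id' => 'Members.Members__id', ...] from the union aliases
$fields = [];
foreach ($selectMainUnion as $alias) {
    $fields[$alias] = $this->getAlias() . '.' . $alias;
}
// replace the earlier select list on the pagination query (second argument overwrites)
$paginationQuery->select($fields, true);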
I'm a little surprised that I haven't found any information on the following question, so please excuse me if I've missed it somewhere in the docs. Using SQL Server (2016 locally and Azure) and EF Core Code First, we're trying to create a computed table column with a persisted value. Creating the column works fine, but I don't have a clue how to persist the value. Here's what we do:
modelBuilder.Entity<SomeClass>(entity =>
{
entity.Property(p => p.Checksum)
.HasComputedColumnSql("(checksum([FirstColumnName], [SecondColumnName]))");
});
And here is what we'd actually like to get in T-SQL:
CREATE TABLE [dbo].[SomeClass]
(
[FirstColumnName] [NVARCHAR](10)
, [SecondColumnName] [NVARCHAR](10)
, [Checksum] AS (CHECKSUM([FirstColumnName], [SecondColumnName])) PERSISTED
);
Can anyone point me in the right direction?
Thanks in advance, Tobi
UPDATE: Based on a good idea by @jeroen-mostert, I also tried to just pass the PERSISTED string as part of the formula:
modelBuilder.Entity<SomeClass>(entity =>
{
entity.Property(p => p.Checksum)
.HasComputedColumnSql("(checksum([FirstColumnName], [SecondColumnName]) PERSISTED)");
});
And also outside of the parentheses:
modelBuilder.Entity<SomeClass>(entity =>
{
entity.Property(p => p.Checksum)
.HasComputedColumnSql("(checksum([FirstColumnName], [SecondColumnName])) PERSISTED");
});
However, and somewhat surprisingly, the computed column is still generated with Is Persisted = No, so the PERSISTED string simply seems to be ignored.
Starting with EF Core 5, the HasComputedColumnSql method has a new optional parameter bool? stored to specify that the column should be persisted:
modelBuilder.Entity<SomeClass>()
.Property(p => p.Checksum)
.HasComputedColumnSql("checksum([FirstColumnName], [SecondColumnName])", stored: true);
After doing some reading and some tests, I ended up trying the PERSISTED inside the SQL query and it worked.
entity.Property(e => e.Duration_ms)
.HasComputedColumnSql("DATEDIFF(MILLISECOND, 0, duration) PERSISTED");
The generated migration was the following:
migrationBuilder.AddColumn<long>(
name: "duration_ms",
table: "MyTable",
nullable: true,
computedColumnSql: "DATEDIFF(MILLISECOND, 0, duration) PERSISTED");
To check on the database whether it is actually persisted I ran the following:
select is_persisted, name from sys.computed_columns where is_persisted = 1
and the column that I've created is there.
" You may also specify that a computed column be stored (sometimes called persisted), meaning that it is computed on every update of the row, and is stored on disk alongside regular columns:"
modelBuilder.Entity<SomeClass>(entity =>
{
    entity.Property(p => p.Checksum)
        .HasComputedColumnSql("checksum([FirstColumnName], [SecondColumnName])", stored: true);
});
This is taken (and slightly modified) from the Microsoft docs: https://learn.microsoft.com/en-us/ef/core/modeling/generated-properties?tabs=data-annotations#computed-columns
We are accessing some objects in Schema2, present in database 2, using a database link from Schema1 in database 1.
This specific DB link will be used by only one application (ApplicationTest).
Currently, when we query V$SESSION on database 2 (the target DB), the PROGRAM, MODULE, and CLIENT_INFO columns are either null or contain some default values.
Clarification Required:
We want to monitor all the applications which access database 2.
Is it possible to populate the PROGRAM, MODULE, and CLIENT_INFO fields in v$session with some tag when the database is accessed via DB link?
Thanks in advance for all your help!
Regards,
Ganesh
You would need to call dbms_application_info on the remote database (database 2 here) in order to populate these columns. Normally, the easiest way to do that would be to ensure that you're only accessing the database link via stored procedures, and then do something like this, where you set the module and action in database 2's v$session before running your query:
declare
  l_cnt integer;
begin
  -- tag the remote session before doing any work over the link
  dbms_application_info.set_module@dblink_to_2(
    module_name => 'DB Link',
    action_name => 'Reading over link'
  );
  select count(*)
    into l_cnt
    from table@dblink_to_2;
  ...
end;
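Once the module is set, the tag is visible on database 2 for as long as the link session stays around; a quick check might look like this (a sketch, assuming the module name used above):
-- run on database 2 while the link session is active
select sid, serial#, module, action, program
from   v$session
where  module = 'DB Link';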