We're using SqlPackage to generate scripts via the Script action. Does anyone know a way to get it to exclude indexes? Can't seem to find a way.
The SqlPackage reference gives several /p: properties for excluding a whole raft of other object types, which we are using to good effect, but not indexes. Indexes we can only tweak, not exclude, it seems. We're using SQL Server 2017, but the same goes for SQL Server 2019.
Has anyone found a way to completely exclude indexes from the script, so that they are just left as they are on the target db, the same as can be done for all the other types of SQL object?
/p: ExcludeObjectTypes=(STRING) A semicolon-delimited list of object types that should be ignored during deployment.
Valid object type names are Aggregates, ApplicationRoles, Assemblies, AsymmetricKeys, BrokerPriorities, Certificates, ColumnEncryptionKeys, ColumnMasterKeys, Contracts, DatabaseRoles, DatabaseTriggers, Defaults, ExtendedProperties, ExternalDataSources, ExternalFileFormats, ExternalTables, Filegroups, FileTables, FullTextCatalogs, FullTextStoplists, MessageTypes, PartitionFunctions, PartitionSchemes, Permissions, Queues, RemoteServiceBindings, RoleMembership, Rules, ScalarValuedFunctions, SearchPropertyLists, SecurityPolicies, Sequences, Services, Signatures, StoredProcedures, SymmetricKeys, Synonyms, Tables, TableValuedFunctions, UserDefinedDataTypes, UserDefinedTableTypes, ClrUserDefinedTypes, Users, Views, XmlSchemaCollections, Audits, Credentials, CryptographicProviders, DatabaseAuditSpecifications, DatabaseScopedCredentials, Endpoints, ErrorMessages, EventNotifications, EventSessions, LinkedServerLogins, LinkedServers, Logins, Routes, ServerAuditSpecifications, ServerRoleMembership, ServerRoles, ServerTriggers.
Please note, we know about /p: DropIndexesNotInSource=True/False and /p: IgnoreIndexOptions=True/False but these are not sufficient.
I believe you need to use a publish.xml file. See this answer for the rough idea; the question it answers has a sample file. If you configure the publish settings via the GUI to leave indexes out, the resulting file should contain something like <ExcludeIndexes>True</ExcludeIndexes>.
Is it considered good practice to always use fully qualified names in Snowflake worksheets?
Asking because I sometimes see things like this:
CREATE OR REPLACE STAGE db.schema.mystage
url = 'xxxxx'
DESC STAGE db.schema.mystage
ALTER STAGE mystage
SET ...
where they don't use the fully qualified name for the ALTER STAGE because, they say, the worksheet context is already set to the correct database and schema.
For me this seems inconsistent and potentially prone to error.
So is it good practice to always use fully qualified names in Snowflake worksheets?
Yes, it's good practice to use fully qualified names. It is particularly useful when you have multiple worksheets open, as it saves you from having to change the database or schema in each one.
As #patrick_at_snowflake mentions, this really comes down to how the scripts are being used and how you are differentiating environments. In the case where you are using databases to differentiate dev, uat, prod, etc. then it is useful to not specify the database in all of your object references. In that case, you may want to qualify schema only, so that running a script in a different environment is as simple as USE DATABASE prod_db or USE DATABASE dev_db without having to update every qualified object name.
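For example (the database and schema names here are made up), a script that qualifies only the schema can be pointed at another environment with a single statement:

-- pick the environment once; everything below stays unchanged
USE DATABASE dev_db;   -- or: USE DATABASE prod_db;

CREATE OR REPLACE STAGE reporting.mystage
url = 'xxxxx';

ALTER STAGE reporting.mystage
SET COMMENT = 'shared stage for reporting loads';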
When I am editing/experimenting/testing in the worksheet I use fully qualified names, so I can share SQL with team members and have it "just work" for them, no matter what their worksheet is pointing at.
When we run our code in prod, we replace the database/schema names with tokens that are substituted depending on the deployment target. The nice thing about this is that when you look at the execution history you get fully qualified names, so you can re-run queries without a lot of fiddling.
But I also would not use full names if I were writing a bug report/repro, as the db/schema is not needed there, and a working example should include all the related data, imo.
I think it is always good practice to have fully qualified object names in your script, not just in the worksheet, but in general, including your UDFs and SPs.
This can help to avoid potential errors where you forget to switch between schemas or databases and data gets updated into the wrong target, or read from the wrong source.
It can help to save lots of your debugging time down the track.
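As a small illustration (all object names here are invented), a SQL UDF that fully qualifies the table it reads from keeps working regardless of the database and schema the calling session happens to be using:

CREATE OR REPLACE FUNCTION mydb.reporting.order_total(p_order_id NUMBER)
RETURNS NUMBER
AS
$$
-- fully qualified reference, so the UDF does not depend on the session context
SELECT SUM(amount) FROM mydb.sales.order_lines WHERE order_id = p_order_id
$$;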
I have 3 'old' Permission Sets (PS1, PS2 and PS3) which need to be merged into a Permission Set #4 (PS4).
PS1, PS2 and PS3 will be deprecated after their respective permissions have been added to PS4. PS4 will remain as the future Permission Set which will gather ALL the permissions for a specific set of Users.
For now, I see that this is a very manual task ("Eye-ball" comparing each PS1, PS2, PS3 with PS4 and adding the missing permissions into PS4) and, as all manual tasks, it is prone to errors.
QUESTIONS:
Can you suggest a tool to COMPARE Permission Sets to make sure I am not missing any permission?
or (even better)
Can you suggest a tool to MERGE Permission Sets in a safe way (to mitigate risk of errors)?
or
Would you recommend a "best approach" or "best practice" for this task?
Thank you very much.
Developer way
You'd need a developer to connect with sfdx (if the command line is scary, there's the VSCode editor) or a similar tool and download the "metadata", and then compare the XML files using something like WinMerge.
https://trailhead.salesforce.com/content/learn/projects/quickstart-vscode-salesforce might help if you've never done it and don't have a developer handy.
Profiles and permission sets can be very big, and what's being downloaded depends on what else you're downloading. Define "everything": if you indicate in "package.xml" that you want all objects, classes and permission sets, the permission set file should include the checkboxes for "Apex Class Access", field level security, allowed record types etc., but it might not include "Visualforce Page Access", tab visibilities etc., because you didn't include them. There's a cool VSCode plugin that builds the "package.xml" file for you, letting you pick what you need.
Once you have that you could load them up in Winmerge (or any "diff tool" you like) and compare up to 3 files. It takes a while to get used to (you could start with comparing two, not 3).
You'll see an overview of changed lines on the left, and you can decide to, say, make the leftmost file the merged one. Go line by line and add permissions as you see them. You could then save the final file as the 4th perm set and use the same sfdx/VSCode setup to deploy it.
Analyst way
If you feel like an Excel guru... This data should be queryable, so you could export it and crack some comparisons that way. Again, the checkboxes are spread across different tables, so you'd need to compare object rights, then field level security, then class access, then...
https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_erd_profile_permissions.htm
This would be a start
SELECT Parent.Name, SobjectType, PermissionsRead, PermissionsCreate, PermissionsEdit, PermissionsDelete
FROM ObjectPermissions
WHERE SobjectType IN ('Account', 'Contact', 'Case') AND Parent.Name IN ('PS1', 'PS2', 'PS3')
ORDER BY SobjectType, Parent.Name
It's a very thankless job because you'd need to write formulas across rows or pivot it somehow... Also note that my PS2 didn't have access to Cases at all - SF doesn't bother holding a row with all false values, it just isn't there.
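Field level security would be a similar query against the FieldPermissions table (field names here are from memory, double-check them against the linked object reference):

SELECT Parent.Name, SobjectType, Field, PermissionsRead, PermissionsEdit
FROM FieldPermissions
WHERE SobjectType IN ('Account', 'Contact', 'Case') AND Parent.Name IN ('PS1', 'PS2', 'PS3')
ORDER BY SobjectType, Field, Parent.Name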
¥€$ way
Money solves everything, eh? Deployment & backup tools like OwnBackup, Gearset, Copado etc have something for detecting changes between projects on disk and orgs... You could rename PS2 to PS1 in another sandbox and make the tool compare them? (I'm not affiliated with any such tool vendor)
There's also https://perm-comparator.herokuapp.com/ if you're not afraid of a 3rd party app getting sysadmin access to your org (haven't used it personally, just Googled it).
Ages ago my colleague got promising exports out of Config Workbook. Again - haven't used personally, screenshots look nice.
I have a scenario where a Java developer has changed the variable that is used to transfer data from the column col of the table tbl.
Now I have to change the column from varchar(15) to varchar(10). But before making this change, I have to handle the existing data and the constraints/dependencies on that column.
What should be the best sequence of doing so?
I am thinking of checking the constraints first, then trimming the existing data, and then altering the table.
Please suggest how to handle the constraints/dependencies and, before handling them, how to check for such dependencies.
Schema-evolution (the DDL changes that happen over time to tables and columns in a database, while preserving existing data and functionality) is a well understood topic with several solutions, some of which are RDBMS independent, others are built-in to the RDBMS solution.
A key requirement for production environments is to need both a forward-change and a backout, which can be run unattended.
Many open source advocates use Liquibase which also has a commercial variant.
Db2 for Linux/Unix/Windows also offers a built-in stored-procedure SYSPROC.ALTOBJ which helps to automate various schema-evolution alterations, including decreasing the size of a column. You would need to study its documentation carefully and test it fully on non-production environments until you are satisfied. Read about it here
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.sql.rtn.doc/doc/r0011934.html
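To give an idea of the shape of a call (treat this as a sketch and check the linked page for the exact parameters and execution modes before relying on it):

-- the exec-mode applies the change; -1 requests a new alter-id; the final ? is an output message parameter
CALL SYSPROC.ALTOBJ('APPLY_CONTINUE_ON_ERROR',
'ALTER TABLE tbl ALTER COLUMN col SET DATA TYPE VARCHAR(10)',
-1, ?);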
You can grow your own script of course, in whatever language you prefer, including SQL, but remember that you should also build and test a back-out script.
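A bare-bones sketch of the forward change (assuming Db2 for LUW and the tbl/col names from the question; the dependency checks against the SYSCAT catalog views and the back-out script still need to be wrapped around it):

-- 1. How many rows would no longer fit?
SELECT COUNT(*) FROM tbl WHERE LENGTH(col) > 10;

-- 2. If truncating is acceptable, trim the offending rows first
UPDATE tbl SET col = LEFT(col, 10) WHERE LENGTH(col) > 10;

-- 3. Shrink the column; depending on the version the table may be left in
--    reorg-pending state, so follow up with a REORG TABLE if required
ALTER TABLE tbl ALTER COLUMN col SET DATA TYPE VARCHAR(10);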
We have about 2'000 "old" objects in a sql server database (tables, views etc.) of which we don't really know if they're still in use. I want to create an extended event listener for these objects. I tried to add a giant WHERE clause to the CREATE EVENT SESSION command, consisting of 2'000 [package0].[equal_int64]([object_id], (<objectId>)) statements.
However, the command max length is 3'000 characters, so I cannot do this. And I guess that the performance of this filter wouldn't be too good, anyway...
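For illustration, the shape of what I tried looks roughly like this (heavily truncated, with placeholder object IDs and just one example event):

CREATE EVENT SESSION [track_old_objects] ON SERVER
ADD EVENT sqlserver.lock_acquired (
    WHERE [package0].[equal_int64]([object_id], (111111111))
       OR [package0].[equal_int64]([object_id], (222222222))
       -- ...and roughly 2'000 more of these, which is what blows the length limit
)
ADD TARGET package0.event_file (SET filename = N'track_old_objects');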
Now my question is: I can query all possible predicates using select * from sys.dm_xe_objects where object_type = 'pred_compare'. This gives me results such as name=equal_uint64, package_guid=60AA9FBF-673B-4553-B7ED-71DCA7F5E972. The package_guid refers to sys.dm_xe_packages, where several DLLs are referenced which seem to implement a particular predicate.
Would it be possible to define my own "package" and implement a predicate there (which would filter the objectId using a hashtable)? Is it possible somehow to import such a package into SQL server so I could define a custom predicate?
Or does anyone have another idea how to implement such a filter?
I have some text data in an SQL Server 2014 table in which I want to detect complex patterns and extract certain portions of the text if the text matches the pattern. Because of this, I need capturing groups.
E.g.
From the text
"Some title, Some Journal name, vol. 5, p. 20-22"
I want to grab the volume number
, vol\. ([0-9]+), p\. [0-9]+
Mind that I have simplified this use-case to improve readability. The above use-case could be solved without capturing groups. The actual use-case handles a lot more exceptions, like:
The journal/title containing "vol.".
Volume numbers/pages containing letters
"vol" being followed by ":" or ";" instead of "."
...
The actual regex I use is the following (yet, this is not a question on regex structure, just elaborating on why I need capturing groups).
(^|§|[^a-z0-9])vol[^a-z0-9]*([a-z]?[0-9]+[a-z]?)
As far as I know, there are two ways of getting Regex functionality into SQL Server.
Through CLR: https://www.simple-talk.com/sql/t-sql-programming/clr-assembly-regex-functions-for-sql-server-by-example/. Yet, this example (from 2009) does not support groups. Are there any commonly used solutions out there that do?
By installing Master Data Services
Since installing and setting up the entire Master Data Services package felt like overkill to get some Regex functionality, I was hoping there'd be an easy, common way out...
I have found a CLR implementation that is super easy to install, and includes Regex capturing group functions.
http://www.sqlsharp.com/
I have installed this in a separate database called 'SQL#' (simply by using the provided installation .sql script), and the functions are located inside a schema with the same name. As a result I can use the function as follows:
select SQL#.SQL#.RegEx_CaptureGroup( 'test (2005) test', '\((20[012][0-9]|19[5-9][0-9])\)', 1, NULL, 1, -1, '');
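Applied to the citation example from the question, it would look something like this (the table and column names are made up and the trailing arguments are simply copied from the call above):

-- hypothetical dbo.Citations table with a CitationText column holding the reference strings
select SQL#.SQL#.RegEx_CaptureGroup( CitationText, ', vol\. ([0-9]+), p\. [0-9]+', 1, NULL, 1, -1, '')
from dbo.Citations;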
Would be nice if this was included by default in SQL Server...