How to list all parameters in Postgres?

I was wondering if there's a parameter for the currently authenticated psql user?
But then I have a broader question - how can I see what all the parameters are?
I might discover some interesting parameters if I could see a whole list of them.
Online I'm only finding how to get the value of one parameter, not a list...

Alvaro has answered the question of how to list your current parameter values.
To get the authenticated user, you can call the SQL function session_user:
SELECT session_user;
The currently effective user can be seen with
SELECT current_user;
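The difference between the two shows up after switching roles; a minimal sketch, assuming a role named readonly exists:
SET ROLE readonly;                  -- hypothetical role, for illustration only
SELECT session_user, current_user;  -- session_user is unchanged, current_user is now "readonly"
RESET ROLE;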
In psql, you can see details about your current database session with
\conninfo

Nonsense. Try these two SQL statements:
set foo.bar =42;
and then:
select current_setting('foo.bar');
You've just set, and read, an entity that the PostgreSQL docs don't seem to name. You might call foo.bar a "user-defined session parameter". Where is its value held? Server-side, of course.
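As a side note, the same round trip can be done with the built-in functions set_config() and current_setting(); the second argument of current_setting() is the missing_ok flag. A minimal sketch:
SELECT set_config('foo.bar', '42', false);   -- false = not transaction-local; equivalent to SET foo.bar = '42'
SELECT current_setting('foo.bar');           -- returns '42'
SELECT current_setting('foo.baz', true);     -- missing_ok = true: returns NULL instead of raising an error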
I too would like to know how to list the names of all currently defined such entities—system-defined, like TimeZone, and user-defined.
— bryn#yugabyte.com

PostgreSQL does not have such a thing as server-side session variables, so it's not clear what you are asking about.
Some PLs (such as PL/Python, PL/Perl) have session variables (%_SHARED in PL/Perl, GD and SD in PL/Python for example), but they are internal to the PL, not part of the server proper.
psql also has variables, which you can set with \set, and you can get a list with the same command. I suppose that's not what you want though.
Maybe you refer to so-called custom GUC configuration parameters, which are sometimes abused as session variables. You can get a list of those using SHOW ALL or SELECT * FROM pg_catalog.pg_settings.
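For the custom (dotted) parameters specifically, filtering pg_settings on names containing a dot should work, since built-in parameter names never contain one; a minimal sketch, assuming the parameter has already been set in the current session:
SET foo.bar = '42';
SELECT name, setting FROM pg_catalog.pg_settings WHERE name LIKE '%.%';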

According to the documentation, SHOW ALL shows all parameters:
SHOW ALL;
This is how SHOW ALL works:
postgres=# SHOW ALL;
            name            |  setting   |                                        description
----------------------------+------------+--------------------------------------------------------------------------------------------
 allow_in_place_tablespaces | off        | Allows tablespaces directly inside pg_tblspc, for testing.
 allow_system_table_mods    | off        | Allows modifications of the structure of system tables.
 application_name           | psql       | Sets the application name to be reported in statistics and logs.
 archive_cleanup_command    |            | Sets the shell command that will be executed at every restart point.
 archive_command            | (disabled) | Sets the shell command that will be called to archive a WAL file.
 archive_mode               | off        | Allows archiving of WAL files using archive_command.
 archive_timeout            | 0          | Forces a switch to the next WAL file if a new file has not been started within N seconds.
 array_nulls                | on         | Enable input of NULL elements in arrays.
 authentication_timeout     | 1min       | Sets the maximum allowed time to complete client authentication.
 autovacuum                 | on         | Starts the autovacuum subprocess.
...
And you can show one specific parameter with SHOW, as shown below:
postgres=# SHOW allow_in_place_tablespaces;
allow_in_place_tablespaces
----------------------------
off
(1 row)
But you cannot show more than one parameter with SHOW, as shown below:
postgres=# SHOW allow_in_place_tablespaces, allow_system_table_mods;
ERROR: syntax error at or near ","
LINE 1: show allow_in_place_tablespaces, allow_system_table_mods;
So to show more than one parameter, use SELECT ... FROM pg_settings, as shown below:
postgres=# SELECT name, setting, short_desc FROM pg_settings WHERE name IN ('allow_in_place_tablespaces', 'allow_system_table_mods');
name | setting | short_desc
----------------------------+---------+------------------------------------------------------------
allow_in_place_tablespaces | off | Allows tablespaces directly inside pg_tblspc, for testing.
allow_system_table_mods | off | Allows modifications of the structure of system tables.
(2 rows)
In addition, current_setting() can show one specific parameter as shown below:
postgres=# SELECT current_setting('allow_in_place_tablespaces');
current_setting
-----------------
off
(1 row)
But you cannot pass more than one parameter name to current_setting(); its second argument is the boolean missing_ok flag, which is why the statement below fails:
postgres=# SELECT current_setting('allow_in_place_tablespaces', 'allow_system_table_mods');
ERROR: invalid input syntax for type boolean: "allow_system_table_mods"
LINE 1: ...ECT current_setting('allow_in_place_tablespaces', 'allow_sys...
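To read several parameters with current_setting(), call it once per parameter in a single SELECT instead:
postgres=# SELECT current_setting('allow_in_place_tablespaces'), current_setting('allow_system_table_mods');
 current_setting | current_setting
-----------------+-----------------
 off             | off
(1 row)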

Related

Azure SQL metric - Alert rule not working as expected

Question: Based on the following storage space statistics of my Azure SQL Managed Instance:
What does the "storage space used (avg) 51.76k" (shown at the bottom of image 1 below) represent?
Which value in the stats table below does 51.76k relate to?
Why the question: I am asking because whenever I create an alert with the condition "When storage space used is greater than x", the alert gets triggered and sends me an email even if I set x to 52, 55, or even 50510. But I am not making any changes to the databases in my Azure SQL Managed Instance.
I can see that my stats query is still returning the same values (shown in the table below), and the metric view (shown in image 1 below) is also not changing. So why does the alert get triggered 5 minutes after I create it? This happened all three times, when I created alerts with thresholds of 52, 55, and 50510 respectively. There must be something I am not doing right, because I thought the alert would only get triggered if the metric exceeded the threshold of, say, 50510.
In my Azure SQL Managed Instance, SSMS is showing my data statistics as follows:
| volume_mount_point | used_gb | available_gb | total_gb |
|--------------------|---------|--------------|----------|
| c:\                | 0.2     | 191.8        | 192.0    |
| http://            | 50.5    | 333.5        | 384.0    |
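The post doesn't include the stats query itself; a query along these lines, using sys.master_files and sys.dm_os_volume_stats (an assumption about how the numbers above were produced), returns the same columns:
SELECT DISTINCT
    vs.volume_mount_point,
    CAST((vs.total_bytes - vs.available_bytes) / 1073741824.0 AS DECIMAL(10,1)) AS used_gb,
    CAST(vs.available_bytes / 1073741824.0 AS DECIMAL(10,1)) AS available_gb,
    CAST(vs.total_bytes / 1073741824.0 AS DECIMAL(10,1)) AS total_gb
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;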
And the Metric view is as follows:
Details:
I am following this tutorial from the MS Azure team, which describes how to create an Alert Rule that sends an alert using the following values (also shown in image 1 below):
Metric: Storage Space used
Condition: When Space used is greater than 1840876 MB
Now item 7 of this section of the article states: "value of 1840876 MB is used representing a threshold value of 1.8 TB", and then a value of "1.84M" is shown in the bottom section of image 1.

Creating a Postgres sequence for each foreign key as a default parameter?

I am trying to build a journal that keeps track of accounts. It's append-only, and each account should have its own sequence. For example:
sequence_nbr | account_id
1            | act_1
2            | act_1
1            | act_2
1            | act_3
2            | act_2
3            | act_1
I'd like sequence_nbr to be a permanent column in my journal table, and I'd like it to be automatically incremented; that is, when I do an insert I shouldn't have to specify a value for it, and Postgres computes the correct per-account sequence number for me.
I have tried two different ways:
Creating a sequence, but I couldn't get it to depend on the value of account_id
Creating a function as in Postgres Dynamically Create Sequences, but I can't figure out how to pass the argument to the function to create a default on the column definition for the journal table.
Is there a way to accomplish what I want in Postgres?
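No answer is recorded here, but one way to sketch this (one approach among several; the counter table, function, and trigger names below are made up for illustration, while the journal columns come from the question) is a per-account counter table maintained by a BEFORE INSERT trigger:
-- Per-account counter; the conflicting-row lock serializes inserts per account.
CREATE TABLE account_counter (
    account_id text PRIMARY KEY,
    last_nbr   bigint NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION next_sequence_nbr() RETURNS trigger AS $$
BEGIN
    INSERT INTO account_counter (account_id, last_nbr)
    VALUES (NEW.account_id, 1)
    ON CONFLICT (account_id)
    DO UPDATE SET last_nbr = account_counter.last_nbr + 1
    RETURNING last_nbr INTO NEW.sequence_nbr;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER journal_sequence_nbr
    BEFORE INSERT ON journal
    FOR EACH ROW EXECUTE FUNCTION next_sequence_nbr();  -- EXECUTE PROCEDURE on PostgreSQL 10 and older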

Bonita - How to skip tasks execution

I'm developing a Bonita process to run in a Bonita Enterprise instance. I need to use a custom actor filter on each task to select possible approvers.
Each actor is mapped to a role
Each user has at least one membership with a role and a group
Some users have at least two memberships with the same role and two different groups
My actor filter, based on configuration, can match users belonging to one or two groups and the role associated with the actor.
Everything is fine until here, but...
I may have no available approvers for the task, so I need to skip it. Everything seems to be fine again, but here comes my issue:
when I try to skip the task, execution fails because there is no input for the task:
2020-03-10 17:52:24.233 +0000 SEVERE: org.bonitasoft.engine.execution.work.InSessionBonitaWork THREAD_ID=89 | HOSTNAME=xxx | TENANT_ID=1 | org.bonitasoft.engine.expression.exception.SExpressionEvaluationException : "PROCESS_DEFINITION_ID=8625658402299344846 | PROCESS_NAME=xxx | PROCESS_VERSION=1.0 | PROCESS_INSTANCE_ID=2003 | ROOT_PROCESS_INSTANCE_ID=2003 | FLOW_NODE_DEFINITION_ID=7701602949247176053 | FLOW_NODE_INSTANCE_ID=40011 | FLOW_NODE_NAME=Task Name | Some data were not found [inputAction]"
org.bonitasoft.engine.expression.exception.SExpressionEvaluationException: PROCESS_DEFINITION_ID=8625658402299344846 | PROCESS_NAME=xxx | PROCESS_VERSION=1.0 | PROCESS_INSTANCE_ID=2003 | ROOT_PROCESS_INSTANCE_ID=2003 | FLOW_NODE_DEFINITION_ID=7701602949247176053 | FLOW_NODE_INSTANCE_ID=40011 | FLOW_NODE_NAME=Task Name | Some data were not found [inputAction]
at org.bonitasoft.engine.expression.DataExpressionExecutorStrategy.evaluate(DataExpressionExecutorStrategy.java:112)
at org.bonitasoft.engine.expression.impl.ExpressionServiceImpl.evaluate(ExpressionServiceImpl.java:154)
at org.bonitasoft.engine.core.expression.control.api.impl.ExpressionResolverServiceImpl.evaluateExpressionsOfKind(ExpressionResolverServiceImpl.java:225)
at org.bonitasoft.engine.core.expression.control.api.impl.ExpressionResolverServiceImpl.evaluateAllExpressionsWithNoDependencies(ExpressionResolverServiceImpl.java:182)
at org.bonitasoft.engine.core.expression.control.api.impl.ExpressionResolverServiceImpl.evaluateExpressionsFlatten(ExpressionResolverServiceImpl.java:115)
at org.bonitasoft.engine.core.expression.control.api.impl.ExpressionResolverServiceImpl.evaluate(ExpressionResolverServiceImpl.java:83)
at org.bonitasoft.engine.execution.transition.TransitionConditionEvaluator.evaluateCondition(TransitionConditionEvaluator.java:44)
at org.bonitasoft.engine.execution.transition.ImplicitGatewayTransitionEvaluator.evaluateTransition(ImplicitGatewayTransitionEvaluator.java:73)
at org.bonitasoft.engine.execution.transition.ImplicitGatewayTransitionEvaluator.evaluatedTransitions(ImplicitGatewayTransitionEvaluator.java:66)
at org.bonitasoft.engine.execution.transition.ImplicitGatewayTransitionEvaluator.evaluateTransitions(ImplicitGatewayTransitionEvaluator.java:42)
at org.bonitasoft.engine.execution.TransitionEvaluator.evaluateOutgoingTransitionsForActivity(TransitionEvaluator.java:80)
at org.bonitasoft.engine.execution.TransitionEvaluator.evaluateOutgoingTransitions(TransitionEvaluator.java:66)
at org.bonitasoft.engine.execution.TransitionEvaluator.buildTransitionsWrapper(TransitionEvaluator.java:126)
at org.bonitasoft.engine.execution.ProcessExecutorImpl.executeValidOutgoingTransitionsAndUpdateTokens(ProcessExecutorImpl.java:698)
at org.bonitasoft.engine.execution.ProcessExecutorImpl.childFinished(ProcessExecutorImpl.java:588)
at org.bonitasoft.engine.execution.ContainerRegistry.nodeReachedState(ContainerRegistry.java:58)
at org.bonitasoft.engine.execution.work.NotifyChildFinishedWork.work(NotifyChildFinishedWork.java:69)
at org.bonitasoft.engine.execution.work.TxBonitaWork.lambda$work$0(TxBonitaWork.java:42)
at org.bonitasoft.engine.transaction.JTATransactionServiceImpl.executeInTransaction(JTATransactionServiceImpl.java:274)
at org.bonitasoft.engine.execution.work.TxBonitaWork.work(TxBonitaWork.java:42)
at org.bonitasoft.engine.execution.work.LockProcessInstanceWork.work(LockProcessInstanceWork.java:63)
at org.bonitasoft.engine.execution.work.failurewrapping.TxInHandleFailureWrappingWork.work(TxInHandleFailureWrappingWork.java:41)
at org.bonitasoft.engine.execution.work.failurewrapping.TxInHandleFailureWrappingWork.work(TxInHandleFailureWrappingWork.java:41)
at org.bonitasoft.engine.execution.work.failurewrapping.TxInHandleFailureWrappingWork.work(TxInHandleFailureWrappingWork.java:41)
at org.bonitasoft.engine.execution.work.InSessionBonitaWork.work(InSessionBonitaWork.java:59)
at org.bonitasoft.engine.work.BonitaThreadPoolExecutor.lambda$submit$1(BonitaThreadPoolExecutor.java:132)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
All tasks have a contract input with a text variable inputAction, whose value is "approved" or "rejected".
I tried to
return an empty list of user IDs, but instantiation fails because the actor filter returns no users;
return an empty list of user IDs and skip the task with a "Catch error" attached to the task, but the task is not skipped (exception above);
throw a UserFilterException, but the task is not skipped (exception above);
use default action, but the task is not skipped (exception above).
Is there a way to accomplish this?
The issue was due to the use of local variables to process approvers' actions. Those variables were not created yet at the moment of failure, so it was not possible to skip the task (bug?). Moving to pool variables solved my issue.

Load big table into web browser using react in on-demand instantiation of table row

I'm building an Excel-like table in the web browser with React.js, using only <div> elements, not <table>.
There are about 90 columns and about 24,000 rows.
As we know, it is not feasible to load the whole data set into the DOM on a single web page due to performance issues.
So I decided to show the user partial data via scrolling.
The main concept is simple: build HTML near the user's viewport.
Say the user is seeing the 1800th to 1900th rows in a single viewport. I will load only about the 1750th ~ 1950th rows into HTML. If the user scrolls up, I'll load HTML for the 1700th ~ 1750th rows and remove the 1900th ~ 1950th rows.
I think I need to manually track the scroll offset to know where the user is. If each row's height is 40px and the viewport height is 1000px, then the user sees 25 items in a single viewport, so I need to load about 25 (before) + 25 (currently visible) + 25 (after) rows, and if the user scrolls up or down, I'll load additional data and remove data that is far away from the user.
However, I found that my table's requirements don't match this situation. Here is my situation.
First, each row's height is not the same. Basically, my table shows a group of rows as a single logical row. What I mean is, a single table row can look like below:
| Photo| ProductName | Size Pool | Stock |
.... // Below are single row
+------+---------------+-------------------+------------+
| | Boots | 110-120 | 24 | // Row header (Shows Summary of child row)
+ +---------------+-------------------+------------+
| | Boots | 110 | 16 | // Row's row #1
+ +---------------+-------------------+------------+
| | Boots | 120 | 8 | // Row's row #2
+------+------------------------------------------------+
...
+------+---------------+-------------------+------------+
| |Leather Shoe | 120 | 8 | // Row can come with no header row, only single
+------+---------------+-------------------+------------+
...
Like above, if a product has more than 2 options, then they are merged into the child rows of a single logical row and shown with a summary header. If it is not an option product, only its own row is shown. And if the content inside a row is big, the row stretches to fit the content.
All data comes from a remote database, retrieved via a REST API.
The database schema looks like below (two tables as an example); a sketch of a windowed fetch query follows the tables.
Table #1 ProductInfo
+--------------+------------+------------+-----------+
| GroupNumber |ProductName | Size | Stock |
+--------------+------------+------------+-----------+
| 1 | Boots | 110 | 16 |
+--------------+------------+------------+-----------+
| 1 | Boots | 120 | 8 |
+--------------+------------+------------+-----------+
| 2 |Leather Shoe| 120 | 8 |
+--------------+------------+------------+-----------+
Table #2 GroupInfo
+-----------+------------+--------------+
|GroupNumber| SizePool | ImageURL |
+-----------+------------+--------------+
| 1 | 110-120 | https://abc |
+-----------+------------+--------------+
| 2 | 120 | https://def |
+-----------+------------+--------------+
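As a sketch of how one window of data could be fetched from these two tables (keyset pagination on GroupNumber; the LIMIT and the starting GroupNumber are placeholders):
SELECT g.GroupNumber, g.SizePool, g.ImageURL,
       p.ProductName, p.Size, p.Stock
FROM  (SELECT GroupNumber, SizePool, ImageURL
       FROM   GroupInfo
       WHERE  GroupNumber > 0        -- last GroupNumber already rendered
       ORDER  BY GroupNumber
       LIMIT  50) AS g               -- roughly one viewport plus a buffer, counted in groups
JOIN   ProductInfo AS p ON p.GroupNumber = g.GroupNumber
ORDER  BY g.GroupNumber, p.Size;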
And future requirements are below (most of them are already implemented):
Sort by each column; multi-pivot sort by child rows OR logical rows (handled via SQL)
Filter data by expression (handled by client)
Hiding, resizing, and changing the order of columns (handled by client)
Interactive components inside cells, like DatePicker, pop-ups, etc. (handled by client)
I succeeded in creating such a table with a page-based method, but I need a scrolling viewport table.
The table contains lots of dependent value columns, like sums and averages, which are not stored in the DB except for special reasons (like performance). Most of them are handled by DB views or procedures, including sorting, calculations, etc. So overall performance is really important.
I have considered a few questions and ways to handle this. Can you check them and give me some advice?
Q1. How can I decide when data should be loaded and removed, and how much?
Row height is not consistent, so I think I cannot use the scroll offset or a row count as the measurement criterion. (Is it possible in a predictable way?)
Is it possible to achieve this by accessing DOM elements? I'm new to web dev. Sorry.
Q2. I can get the data from the DB in two different ways:
Getting ProductInfo and GroupInfo separately: [<ProductInfo>,...] and [<GroupInfo>,...]
Getting a single group as an object like this: { group:<GroupInfo>, values:[<ProductInfo>,...] }
Which is better for performance in this case, in typical situations?
Q3. If I get data like { group:<GroupInfo>, values:[<ProductInfo>,...] }, are there any problems with performance?
Like query overhead: I need a query with 6 joins and nested SELECTs up to 6 levels deep, with 30 calculated columns, for a single data retrieval attempt. (A pre-calculated view or table can have problems because many users use and update it frequently, so I need to worry about mutual exclusion, at least on updates.)
I'm sure the above query's performance is sufficient for windowing if I get data like [<ProductInfo>,...] and [<GroupInfo>,...], but I think the latter shape is better, so I need to change the interface if possible.
Q4. If I fetch the whole data set from the DB and structure it at the beginning, and then only load and remove data in the DOM, can that be a good way?
Of course, Q1 is my primary concern, but this also seems good, except for data sync with the DB (because another user can update values while the client holds outdated data).
I considered using infinite scrolling, but it doesn't fit my case: I need to load and remove data at the same time, and infinite scrolling does not seem to support removing data from the viewport. Also, the inconsistent row height may be a problem.
I found react-virtualized and it works.
It also supports dynamic resizing of rows, which helped greatly.

Watson Discovery Service Issue

Right way - it's working
Wrong way - it isn't working as it should
I'd like your help with an issue. I'm using WDS (Watson Discovery Service), and I created a collection that was populated with several pieces of a manual. Once I did that, in the Conversation service I also created, I put some descriptions on the intents that Discovery should use. Now, when I try to identify these descriptions in the Discovery Service, it is not recognized unless I write exactly the same text when testing. Any suggestion about what I can use to fix it?
e.g. I uploaded a metadata txt file with the following fields:
+---------------------+------------+-------------+-----------------------+---------+------+
| Document | DocumentID | Chapter | Session | Title | Page |
+---------------------+------------+-------------+-----------------------+---------+------+
| Instructions Manual | BR_1 | Maintenance | Long Period of Disuse | Chassis | 237 |
+---------------------+------------+-------------+-----------------------+---------+------+
Now, when I search in Discovery, I need to use exactly the word I put in the intent's description (Chassis). Otherwise Discovery doesn't find it with the query below:
metadata.Title:chas*|metadata.Chapter:chas*|metadata.Session:chas*
Any idea?
Please check whether the syntax is right or wrong by testing it with the Discovery query tool.
Sometimes values need to be wrapped in quotation marks escaped with a backslash.
