Detect type of Sandbox (Salesforce)

Salesforce supports different types of sandboxes, for example a "Partial Copy" or "Developer" sandbox.
Is there a way to detect which kind of sandbox my script is connected to?
I use Python and simple_salesforce.

My Python's not good enough to give you exact code; I can give hints, but you'll have to experiment a bit yourself.
The "Additional features" section of https://github.com/simple-salesforce/simple-salesforce says there's an internal class that can expose the session_id and instance to you.
You can use these to craft an HTTP GET call to
{instance}/services/data/v51.0/limits
with the header
Authorization: Bearer {session_id}
The "limits" resource will tell you (among other things) how much data and file storage is available in this org. It'll return JSON similar to
{
  ...
  "DataStorageMB" : {
    "Max" : 200,
    "Remaining" : 196
  },
  ...
}
Use DataStorageMB.Max and the table at the bottom of https://help.salesforce.com/articleView?id=sf.data_sandbox_environments.htm&type=5 to figure out where you are: 200 => Developer, 1024 => Developer Pro...
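A minimal sketch of that flow in Python (untested; the login arguments and the v51.0 API version are placeholders to adjust):
import requests
from simple_salesforce import Salesforce

# domain='test' points simple_salesforce at the sandbox login endpoint
sf = Salesforce(username='user@example.com.mysandbox', password='...',
                security_token='...', domain='test')

# Craft the GET call described above from the exposed session_id and instance
resp = requests.get(
    'https://{}/services/data/v51.0/limits'.format(sf.sf_instance),
    headers={'Authorization': 'Bearer {}'.format(sf.session_id)})
storage_max = resp.json()['DataStorageMB']['Max']

# Map the storage cap to a sandbox type using the help article's table
sandbox_type = {200: 'Developer', 1024: 'Developer Pro',
                5120: 'Partial Copy'}.get(storage_max, 'Full Copy')
print(sandbox_type)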
Edit: if you'd rather use Apex (maybe exposed as a REST service; simple_salesforce has a nice built-in to access those, see the sketch after this code block):
Integer storageLimit = OrgLimits.getMap().get('DataStorageMB').getLimit();
System.debug(storageLimit);
String sandboxType;
switch on storageLimit {
    when 200 {
        sandboxType = 'Developer';
    }
    when 1024 {
        sandboxType = 'Developer Pro';
    }
    when 5120 {
        sandboxType = 'Partial Copy';
    }
    when else {
        sandboxType = 'Full Copy';
    }
}
System.debug(sandboxType);
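A hedged Python sketch of calling that Apex, assuming you exposed it as a REST service at a hypothetical /SandboxInfo endpoint, and reusing the sf connection from the sketch above (apexecute is simple_salesforce's built-in for Apex REST services):
# 'SandboxInfo' is a hypothetical @RestResource urlMapping, not a real endpoint
sandbox_type = sf.apexecute('SandboxInfo', method='GET')
print(sandbox_type)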

Steps to find the sandbox type:
Setup --> Deployment Settings --> Continue --> there you will find the type of sandbox.

Google.Cloud.AppEngine.V1 client libraries and traffic splitting in .NET

I am trying to use the Client Libraries provided by Google to move traffic from one version of an app in App Engine to another. However, the documentation for doing this only talks about using the REST API, not the client libraries.
Here is some example code:
var servicesClient = Google.Cloud.AppEngine.V1.ServicesClient.Create();
var updateServiceRequest = new UpdateServiceRequest();
updateServiceRequest.Name = "apps/myProject/services/myService";
var updateMask = new Google.Protobuf.WellKnownTypes.FieldMask();
updateServiceRequest.UpdateMask = updateMask;
// See below for what should go here...
var updateResponse = servicesClient.UpdateService(updateServiceRequest);
My question is what format do I use for the update mask?
According to the documentation I should put in:
split {"split": { "allocations": { "newVersion": 1 } } }
But when I try: updateMask.Paths.Add(@"split { ""split"": { ""allocations"": { ""myNewVersion"": 1 } } }");
... I get the exception:
"This operation is only supported on the following field(s): [labels, migration_config, network_settings, split, tag_to_target_map], but got field(s): [split { "split": { "allocations": { "myNewVersion": 1 } } }] from the update request.
Any ideas where I should put the details of the split in the field mask object? The property Paths just seems to be a collection of strings.
The examples for these libraries in Google's doco are pretty poor :-(
I raised a support ticket with Google, and although the solution they suggested didn't work exactly as given (it tried to assign a string to UpdateMask, which needs a FieldMask object), I managed to use it to find the correct solution.
The code should be:
// appService is a previously retrieved Service object from the ListServices method
var updateServiceRequest = new UpdateServiceRequest();
updateServiceRequest.Name = appService.Name;
updateServiceRequest.UpdateMask = new Google.Protobuf.WellKnownTypes.FieldMask();
updateServiceRequest.UpdateMask.Paths.Add("split");
appService.Split.Allocations.Clear();
appService.Split.Allocations["newServiceVersion"] = 1; // key = ID of the version to receive traffic
updateServiceRequest.Service = appService;
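With the request built, sending it works just like in the question's snippet. A hedged completion (UpdateService returns a long-running operation, which can be polled if you want to wait for the split to take effect):
var updateOperation = servicesClient.UpdateService(updateServiceRequest);
// Optional: block until the traffic move has completed
updateOperation.PollUntilCompleted();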

google cloud online glossary creation returning "empty resource name" error

I am following the EXACT steps indicated here
https://cloud.google.com/translate/docs/glossary#create-glossary
to create an online glossary.
I am getting the following error
madan@cloudshell:~ (focused-pipe-251317)$ ./rungcglossary
{
  "error": {
    "code": 400,
    "message": "Empty resource name.; Resource type: glossary",
    "status": "INVALID_ARGUMENT"
  }
}
Here is the body of my request.json
{
  "languageCodesSet": {
    "languageCodes": ["en", "en-GB", "ru", "fr", "pt-BR", "pt-PT", "es"]
  },
  "inputConfig": {
    "gcsSource": {
      "inputUri": "gs://focused-pipe-251317-vcm/testgc.csv"
    }
  }
}
I copied the inputUri path from the file URI box of the Google Cloud bucket.
I am not able to understand what the issue is; all I know is that something seems to be wrong with the inputUri string.
Please help.
Thanks.
I am a Google Cloud Technical Support Representative, and we know that, for the moment, there is an issue with the REST API that is already being tracked. I tried to reproduce your situation, and when creating the glossary directly through the API I got the same issue as you.
After that, I tried to create the glossary programmatically using an HTTP-triggered Python Cloud Function, and everything went just right. This way, the API is called with the Cloud Function's service account.
Here is the code of my Python Cloud Function:
from google.cloud import translate_v3beta1 as translate

def create_glossary(request):
    request_json = request.get_json()
    client = translate.TranslationServiceClient()

    ## Set your project name
    project_id = 'your-project-id'
    ## Set your wished glossary-id
    glossary_id = 'your-glossary-id'
    ## Set your location
    location = 'your-location'  # The location of the glossary

    name = client.glossary_path(
        project_id,
        location,
        glossary_id)
    language_codes_set = translate.types.Glossary.LanguageCodesSet(
        language_codes=['en', 'es'])

    ## SET YOUR BUCKET URI
    gcs_source = translate.types.GcsSource(
        input_uri='your-gcs-source-uri')
    input_config = translate.types.GlossaryInputConfig(
        gcs_source=gcs_source)
    glossary = translate.types.Glossary(
        name=name,
        language_codes_set=language_codes_set,
        input_config=input_config)

    parent = client.location_path(project_id, location)
    operation = client.create_glossary(parent=parent, glossary=glossary)

    result = operation.result(timeout=90)
    print('Created: {}'.format(result.name))
    print('Input Uri: {}'.format(result.input_config.gcs_source.input_uri))
The requirements.txt should include the following dependencies:
google-cloud-translate==1.4.0
google-cloud-storage==1.14.0
Do not forget to modify the code with your own parameters.
Basically, I have just followed the same tutorial as you, but for Python, and I used Cloud Functions; my guess is that you could use App Engine Standard as well. This may be an issue with the service account that is used to call this API. In case this doesn't work for you, let me know and I will try to edit my answer.
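As a side note (an educated guess, not confirmed in this thread): the error "Empty resource name.; Resource type: glossary" is also what you would expect if the glossary's name field were missing from request.json. The documented request body includes a name of the form projects/PROJECT_ID/locations/LOCATION/glossaries/GLOSSARY_ID, so a corrected body might look like this (the location and glossary ID here are placeholders):
{
  "name": "projects/focused-pipe-251317/locations/us-central1/glossaries/my-glossary",
  "languageCodesSet": {
    "languageCodes": ["en", "en-GB", "ru", "fr", "pt-BR", "pt-PT", "es"]
  },
  "inputConfig": {
    "gcsSource": {
      "inputUri": "gs://focused-pipe-251317-vcm/testgc.csv"
    }
  }
}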

Is it safe to access Elasticsearch from a client without going through an API server?

For example, suppose you embed the following JavaScript code in Vue.js or React.js.
var elasticsearch = require('elasticsearch');
var esclient = new elasticsearch.Client({
  host: 'your Elasticsearch Cloud host URL'
});
esclient.search({
  index: 'your index',
  body: {
    query: {
      match: { message: 'search keyword' }
    },
    aggs: {
      your_states: {
        terms: {
          field: 'your field',
          size: 10
        }
      }
    }
  }
}).then(function (response) {
  var hits = response.hits.hits;
});
For the search feature of an application like Stack Overflow, suppose public access is restricted to read-only (GET) requests via the role settings of Elasticsearch Cloud.
Couldn't the same thing then be achieved with the client-side code above, without preparing an API server at all?
Is this a security problem (for example, is it dangerous for the host name to end up on the client side)?
If there is no problem, search responses would be faster and implementation cost lower, so I wonder why more people don't do this (sample code like this is hard to find on the net).
Thank you.
It is NOT a good idea.
If any client with a bit of programming knowledge finds out your Elasticsearch IP address, you are screwed; they could basically delete all the data without you even noticing.
I can't speak to X-Pack Security, but if you are not using it you are absolutely forced to hide ES behind an API.
Then you also have to secure your ES domain to allow access only from the API server and block the rest of the world.
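A minimal sketch of such an API server in Python with Flask (the index and field names are placeholders; a real deployment would also need input validation and rate limiting):
from elasticsearch import Elasticsearch
from flask import Flask, jsonify, request

app = Flask(__name__)
# Elasticsearch is reachable only from this server, never from browsers
es = Elasticsearch('http://localhost:9200')

@app.route('/search')
def search():
    # Expose one fixed, read-only query shape; clients never see the ES host
    keyword = request.args.get('q', '')
    result = es.search(index='your-index', body={
        'query': {'match': {'message': keyword}}})
    return jsonify(result['hits']['hits'])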

Admin API to patch minimum instances for Flex environment

Is it possible to use the Admin API to update the minimum number of total instances for a GAE Flex environment?
I've tried using both the client library and the web API explorer, and I keep getting the 400 response "Frontend automatic scaling should NOT have the following parameter(s): [min_total_instances]".
My update mask is: automaticScaling.min_total_instances
My request body is:
{
  "automaticScaling": {
    "minTotalInstances": 4
  }
}
I've tried different variants of the update mask and I still get the same error. According to the documentation, this operation should be possible.
This is actually not correctly documented, but you need to add the "env": "flex" parameter, since the Version resource in the request body defaults to the standard environment:
{
  "automaticScaling": {
    "minTotalInstances": 4
  },
  "env": "flex"
}
I've raised a documentation update request to make it clearer.
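For reference, a hedged sketch of the same PATCH call made directly against the Admin API from Python (the app, service, and version IDs are placeholders, and the access token is assumed to come from your environment):
import requests

access_token = '...'  # e.g. from gcloud auth print-access-token
url = ('https://appengine.googleapis.com/v1/apps/my-app'
       '/services/default/versions/my-version')
resp = requests.patch(
    url,
    params={'updateMask': 'automaticScaling.min_total_instances'},
    json={'automaticScaling': {'minTotalInstances': 4}, 'env': 'flex'},
    headers={'Authorization': 'Bearer ' + access_token})
print(resp.json())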

Correct way to check if DocumentDB object exists

In Microsoft's examples I saw two ways to check whether a DocumentDB object such as a Database, DocumentCollection, or Document exists.
The first is by creating a query:
Database db = client.CreateDatabaseQuery().Where(x => x.Id == DatabaseId).AsEnumerable().FirstOrDefault();
if (db == null)
{
    await client.CreateDatabaseAsync(new Database { Id = DatabaseId });
}
The second is by using a try/catch block:
try
{
    await this.client.ReadDatabaseAsync(UriFactory.CreateDatabaseUri(databaseName));
}
catch (DocumentClientException de)
{
    if (de.StatusCode == HttpStatusCode.NotFound)
    {
        await this.client.CreateDatabaseAsync(new Database { Id = databaseName });
    }
    else
    {
        throw;
    }
}
Which of these is the correct way to do it in terms of performance?
You should use the new CreateDatabaseIfNotExistsAsync in the DocumentDB SDK instead of either of these approaches, if that's what you're trying to do.
In terms of server resources (request units), a direct read such as ReadDatabaseAsync is slightly more lightweight than CreateDatabaseQuery, so use the read when possible.
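For reference, the recommended call is a one-liner (a sketch, using the same client and databaseName as the snippets above):
await client.CreateDatabaseIfNotExistsAsync(new Database { Id = databaseName });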
I've just seen the try/catch example in one of the Microsoft-provided sample projects, and it baffled me, because it is plain wrong: you don't use try/catch for control flow.
Never.
This is just bad code. The new SDK provides CreateDatabaseIfNotExistsAsync, which I can only hope doesn't just hide the same pattern internally. With the older library, just use the query approach, unless you want to get shouted at by whoever reviews the code.
