If you would like to use the S3 service of the Swisscom Application Cloud with the popular Cyberduck app, you have to use a custom connection profile with AWS2.
You can find this profile here for download
Authentication with signature version AWS2
Incomplete list of known providers that require the use of AWS2
Riak Cloud Storage
EMC Elastic Cloud Storage
Thank you very much for sharing this nice tool tip. I've added a few screenshots here for clarification.
1) brew cask install cyberduck
2) Download the linked S3 AWS2 Signature Version (HTTPS).cyberduckprofile file and open it with Cyberduck.
3) Copy the credentials and host from cf env or create service keys.
System-Provided:
{
  "VCAP_SERVICES": {
    "dynstrg": [
      {
        "credentials": {
          "accessHost": "ds31s3.swisscom.com",
          "accessKey": "24324234234243456546/CF_P8_FFGTUZ_TGGLJS_JFG_B347EEACE",
          "sharedSecret": "sfdklaslkfklsdfklmsklmdfklsd"
        },
        "label": "dynstrg",
        "name": "cyberduck-testing",
        "plan": "usage",
        "provider": null,
        "syslog_drain_url": null,
        "tags": [],
        "volume_mounts": []
      }
    ]
  }
}
The sharedSecret is named "Secret Access Key" in Cyberduck.
4) Create an initial bucket (it's called a Folder in Cyberduck).
5) Upload some files with drag and drop.
Some open-source command-line alternatives that some people use with Swisscom's EMC Atmos (dynstrg service) are:
S3cmd
S3cmd is a free command line tool and client for uploading, retrieving
and managing data in Amazon S3 and other cloud storage service
providers that use the S3 protocol, such as Google Cloud Storage or
DreamHost DreamObjects. It is best suited for power users who are
familiar with command line programs. It is also ideal for batch
scripts and automated backup to S3, triggered from cron, etc.
Minio Client
Minio Client is a replacement for ls, cp, mkdir, diff and rsync
commands for filesystems and object storage.
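Beyond these GUI and command-line tools, the same credentials from cf env can also be used from code. Below is a rough, untested sketch using the legacy Python boto (2.x) library, which signs S3 requests with AWS2 by default; boto is not mentioned above and the bucket/file names are made up, so treat this purely as an illustration:

# Illustrative sketch only: connect to the dynstrg S3 endpoint with legacy
# boto 2.x, which uses AWS2 (signature version 2) for S3 by default.
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

conn = S3Connection(
    aws_access_key_id='accessKey from cf env',
    aws_secret_access_key='sharedSecret from cf env',
    host='ds31s3.swisscom.com',              # accessHost from cf env
    calling_format=OrdinaryCallingFormat(),  # path-style requests
)

bucket = conn.create_bucket('my-first-bucket')  # the "Folder" created in Cyberduck
key = bucket.new_key('hello.txt')
key.set_contents_from_string('Hello dynstrg')
print([k.name for k in bucket.list()])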
Related
My Google project ID is "ulapph-public-1" and it's been working fine since 2015. But yesterday when I redeployed the project, it said:
ERROR: (gcloud.app.deploy) Error Response: [7] Insufficient permissions to create Google Cloud Storage bucket.
The link to Google Cloud Storage is also no longer accessible:
https://console.cloud.google.com/storage/browser/ulapph-public-1.appspot.com?project=ulapph-public-1
Then the Google Cloud Storage browser says this error:
A required resource is not available.
Tracking Number: c4898802499062169
I tried checking the IAM settings but had no luck; the deploy won't proceed. I even tried going to App Engine -> Settings -> Default Cloud Storage Bucket. The button I see there is "Create", but when I click it the error says "Default Cloud Storage Bucket creation failed".
I also checked the Cloud Storage browser settings and noticed that there is no associated service account for Cloud Storage. In my other projects I usually see a service account like:
service-123456789012@gs-project-accounts.iam.gserviceaccount.com
Cloud Storage Service Account
Each project has an associated Cloud Storage service account. This is used to perform certain background actions: receiving PubSub notifications and encrypting/decrypting KMS encrypted objects.
Is there anyone who is familiar with this issue?
- How do I manually add the missing service account service-123456789012@gs-project-accounts.iam.gserviceaccount.com?
- What could have triggered the removal of the service account?
File "/google/google-cloud-sdk/lib/googlecloudsdk/api_lib/util/waiter.py", line 320, in _IsNotDone
return not poller.IsDone(operation)
File "/google/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/operations_util.py", line 182, in IsDone
encoding.MessageToPyValue(operation.error)))
OperationError: Error Response: [7] Insufficient permissions to create Google Cloud Storage bucket.
Details: [
[
{
"#type": "type.googleapis.com/google.rpc.ResourceInfo",
"resourceName": "staging.ulapph-public-1.appspot.com",
"resourceType": "cloud storage bucket"
}
]
]
ERROR: (gcloud.app.deploy) Error Response: [7] Insufficient permissions to create Google Cloud Storage bucket.
Details: [
[
{
"#type": "type.googleapis.com/google.rpc.ResourceInfo",
"resourceName": "staging.ulapph-public-1.appspot.com",
"resourceType": "cloud storage bucket"
}
]
]
Note that all my other Google projects are working fine and billing is fine, but I only have the Bronze support package, so I can't contact technical support.
My GAE (Standard) app is hosted in the europe-west region.
I am looking into creating a Cloud SQL instance in support of this app and would like to place it as close to the GAE app as possible.
Cloud SQL instances are currently available in the following locations:
europe-west1 Belgium
europe-west2 London
europe-west3 Frankfurt
Is there any way to find out additional location details of my GAE app in order to decide which Cloud SQL location to use?
You can use gcloud app describe from the Cloud Shell to find out your app's location info. Running this command returns something like this:
authDomain: gmail.com
codeBucket: staging.my-project-id.appspot.com
defaultBucket: my-project-id.appspot.com
defaultHostname: my-project-id.appspot.com
featureSettings:
  splitHealthChecks: true
gcrDomain: eu.gcr.io
id: my-project-id
locationId: europe-west2
name: apps/my-project-id
servingStatus: SERVING
See the command description here.
Next you can create your SQL instance by typing gcloud sql instances create [your-instance-name] --region=[region-of-your-choice]. For example:
user-id@my-project-id:~$ gcloud sql instances create test-instance --region=europe-west2
Creating Cloud SQL instance...done.
Created [https://www.googleapis.com/sql/v1beta4/projects/my-project-id/instances/test-instance].
NAME DATABASE_VERSION REGION TIER ADDRESS STATUS
test-instance MYSQL_5_6 europe-west2 db-n1-standard-1 00.000.000.000 RUNNABLE
user-id@my-project-id:~$
All the available options are here.
A core requirement of my application is the ability to automatically deploy ArangoDB with all collections, graphs, data, and APIs. The HTTP API and the various wrappers have been sufficient for this so far, but I haven't been able to find an API for deploying Foxx services. Is there any way to create and deploy a Foxx service via RESTful API or through one of the wrappers? So far, the only way I know to create a Foxx service is through the web interface.
I found this question which leads me to believe it's possible, but I don't know how to specify the Git location of the Foxx service. Could you provide instructions for creating a Foxx service without the web UI and list the possible parameters?
To install a Foxx service via the REST API, you can use the endpoint HTTP PUT /_admin/foxx/install.
It requires a JSON body to be sent, with attributes named mount and appInfo. mount needs to contain the mount point (it must start with a forward slash). appInfo is the application to be mounted; it can contain a filename previously returned by the server from a call to /_api/upload, e.g.
{
"appInfo" : "uploads/tmp-30573-2010894858",
"mount" : "/my-mount-point"
}
install from remote URL
You can also install a Foxx service from a zip file available via HTTP(S) from an external server. You can include the username and password for HTTP Basic Auth as necessary:
{
"appInfo" : "https://user:password#example.com/my-service.zip",
"mount" : "/my-mount-point"
}
install from GitHub
You can also install a Foxx service from a GitHub repository, if the repository is publicly accessible, e.g.
{
"appInfo" : "git:arangodb-foxx/demo-hello-foxx:master",
"mount" : "/my-mount-point"
}
Behind the scenes, ArangoDB will translate the request into a regular URL for the zip bundle GitHub provides.
install from local file system
You can also install a Foxx service from a zip file or directory on the local filesystem:
{
"appInfo" : "/path/to/foxx-service.zip",
"mount" : "/my-mount-point"
}
This also works with a directory; in that case ArangoDB will create a temporary zip file for you.
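For completeness, here is a rough, untested sketch in Python (using the requests library) of the upload-then-install flow described above. The host, credentials and the "filename" attribute of the upload response are assumptions on my side, not verified API details:

# Rough sketch: upload a local zip bundle via /_api/upload, then mount it
# with PUT /_admin/foxx/install (endpoints as described above).
import requests

ARANGO = "http://localhost:8529"  # adjust to your server
AUTH = ("root", "")               # adjust to your credentials

# 1) upload the zip bundle; the server should answer with a temporary filename
with open("my-foxx-service.zip", "rb") as bundle:
    upload = requests.post(ARANGO + "/_api/upload", data=bundle, auth=AUTH)
upload.raise_for_status()
temp_filename = upload.json().get("filename")  # e.g. "uploads/tmp-30573-2010894858"

# 2) install the uploaded bundle at the desired mount point
install = requests.put(
    ARANGO + "/_admin/foxx/install",
    json={"appInfo": temp_filename, "mount": "/my-mount-point"},
    auth=AUTH,
)
print(install.status_code, install.json())

The same call should work with any of the other appInfo variants shown above (remote URL, git:... reference, or a local path on the server).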
My app stores a bunch of images as blobs. This is roughly how I store images.
from google.appengine.api import files
# ...
fname = files.blobstore.create(mime_type='image/jpeg')
with files.open(fname, 'a') as f:
    f.write(image_byte)
files.finalize(fname)
blob_key = files.blobstore.get_blob_key(fname)
To serve these images, I use images.get_serving_url(blob_key).
Here are my questions:
Will I have to copy over all blobs to Google Cloud Storage? In other words, will I be able to access my existing blobs using GCS client library and existing blob keys? Or, will I have to copy the blobs over to GCS and get new blob keys?
Assuming I do have to copy them over to GCS, what is the easiest way? Is there a migration tool or something? Failing that, is there some sample code I can copy-paste?
Thanks!
The files have all been going into GCS for a while. The blobstore is just an alternate way to access it. The blob keys and access shouldn't be affected.
You will, however, need to stop using the files API itself and start using the GCS API to create the files.
1) No, you can still use the blobstore. You can also upload files to the blobstore when you use the BlobstoreUploadHandler.
2) Migration is easy when you use the blobstore, because you can create a blobkey for GCS objects. And when you use the default GCS bucket you have free quota.
from google.appengine.api import app_identity
from google.appengine.api import images
from google.appengine.ext import blobstore
import cloudstorage as gcs

default_bucket = app_identity.get_default_gcs_bucket_name()
gcs_filename = '/%s/%s' % (default_bucket, image_file_name)

# write the image bytes to GCS instead of using the deprecated Files API
with gcs.open(gcs_filename, 'w', content_type='image/jpeg') as f:
    f.write(image_byte)

# create a blobstore-compatible key for the GCS object and a serving url
blob_key = blobstore.create_gs_key('/gs' + gcs_filename)
serving_url = images.get_serving_url(blob_key)
I received an email from Google Cloud Platform on May 19, 2015; an excerpt is shown here:
The removal of the Files API will happen in the following manner.
On May 20th, 2015 no new applications will have access to the Files
API. Applications that were created prior to May 20th, 2015 will
continue to run without any issues. That said, we strongly encourage
developers to start switching over to the Cloud Storage Client Library
today.
On July 28th, 2015 starting at 12pm Pacific Time, the Files API will
be temporarily shutdown for 24 hrs.
On August 4th, 2015, we will permanently shut down the Files API at
12:00pm Pacific time.
Since I was using the exact same code to write a blobstore file, I spent a day researching the GCS system. After failing to get a "service account" to work (by going through poorly documented OAuth2 confusion), I gave up on using GCS.
Now I am using ndb's BlobProperty. I keep the blobs in a separate model using both a parent key and a key name (as filename) to locate the images. Using a separate model keeps the huge blob out of my regular entities so fetches aren't slowed down by their sheer size. I wrote a separate REST API just for the images.
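For illustration, a minimal sketch of that separate-model pattern could look like the following; the names are hypothetical, not the author's actual code:

# Illustrative sketch: keep image blobs in their own ndb model, keyed by a
# parent entity key plus the filename. Model and function names are made up.
from google.appengine.ext import ndb

class ImageBlob(ndb.Model):
    data = ndb.BlobProperty(required=True)
    content_type = ndb.StringProperty(default='image/jpeg')

def save_image(parent_key, filename, image_bytes):
    # the filename doubles as the key name under the parent entity
    ImageBlob(parent=parent_key, id=filename, data=image_bytes).put()

def load_image(parent_key, filename):
    entity = ndb.Key(ImageBlob, filename, parent=parent_key).get()
    return entity and entity.data

Keep in mind that a single Datastore entity is limited to roughly 1 MB, so this only works for images below that size.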
I too faced the same issue while running the GAE server locally:
com.google.appengine.tools.cloudstorage.NonRetriableException: com.google.apphosting.api.ApiProxy$FeatureNotEnabledException: The Files API is disabled. Further information: https://cloud.google.com/appengine/docs/deprecations/files_api
In my case, this is what fixed the issue. I simply changed
This:
compile 'com.google.appengine.tools:appengine-gcs-client:0.4.1'
To:
compile 'com.google.appengine.tools:appengine-gcs-client:0.5'
in the build.gradle file, because the Files API (Beta) was deprecated on June 12, 2013 and turned down on September 9, 2015. (Source)
According to this MVN repo, the latest version is 'com.google.appengine.tools:appengine-gcs-client:0.5'.
In Salesforce you can schedule up to weekly "backups"/dumps of your data here: Setup > Administration Setup > Data Management > Data Export
If you have a large Salesforce database, there can be a significant number of files to download by hand.
Does anyone have a best practice, tool, batch file, or trick to automate this process or make it a little less manual?
Last time I checked, there was no way to access the backup file status (or actual files) over the API. I suspect they have made this process difficult to automate by design.
I use the Salesforce scheduler to prepare the files on a weekly basis, then I have a scheduled task that runs on a local server which downloads the files. Assuming you have the ability to automate/script some web requests, here are some steps you can use to download the files:
1) Get an active Salesforce session ID/token
   - enterprise API: login() SOAP method
2) Get your organization ID ("org ID")
   - Setup > Company Profile > Company Information, OR
   - use the enterprise API getUserInfo() SOAP call to retrieve your org ID
3) Send an HTTP GET request to https://{your sf.com instance}.salesforce.com/ui/setup/export/DataExportPage/d?setupid=DataManagementExport
   - Set the request cookie as follows: oid={your org ID}; sid={your session ID};
4) Parse the resulting HTML for instances of <a href="/servlet/servlet.OrgExport?fileName=
   (The filename begins after fileName=)
5) Plug the file names into this URL to download (and save): https://{your sf.com instance}.salesforce.com/servlet/servlet.OrgExport?fileName={filename}
   - Use the same cookie as in step 3 when downloading the files
This is by no means a best practice, but it gets the job done. It should go without saying that if they change the layout of the page in question, this probably won't work any more. Hope this helps.
A script to download the SalesForce backup files is available at https://github.com/carojkov/salesforce-export-downloader/
It's written in Ruby and can be run on any platform. The supplied configuration file provides fields for your username, password, and download location.
With a little configuration you can get your downloads going. The script sends email notifications on completion or failure.
It's simple enough to figure out the sequence of steps needed to write your own program if the Ruby solution does not work for you.
I'm Naomi, CMO and co-founder of cloudHQ, so I feel like this is a question I should probably answer. :-)
cloudHQ is a SaaS service that syncs your cloud. In your case, you'd never need to run a data export from Salesforce; you'd just always have your reports backed up in a folder labeled "Salesforce Reports" in whichever service you synchronized Salesforce with: Dropbox, Google Drive, Box, Egnyte, SharePoint, etc.
The service is not free, but there's a free 15 day trial. To date, there's no other service that actually syncs your Salesforce reports with other cloud storage companies in real-time.
Here's where you can try it out: https://cloudhq.net/salesforce
I hope this helps you!
Cheers,
Naomi
Be careful that you know what you're getting in the backup file. The backup is a zip of 65 different CSV files. It's raw data that cannot be used very easily outside of the Salesforce UI.
Our company makes the free DataExportConsole command line tool to fully automate the process. You do the following:
Automate the weekly Data Export with the Salesforce scheduler
Use the Windows Task Scheduler to run the FuseIT.SFDC.DataExportConsole.exe file with the right parameters.
I recently wrote a small PHP utility that uses the Bulk API to download a copy of the sObjects you define via a JSON config file.
It's pretty basic but can easily be expanded to suit your needs.
Force.com Replicator on github.
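As a rough idea of the same approach in Python rather than PHP (not the utility above), here is an untested sketch that reads a small JSON config of SOQL queries and exports each sObject with the Bulk API via simple_salesforce (the same library used in the script in the next answer); the config file name and structure are made up:

# Untested, illustrative sketch: export configured sObjects via the Bulk API.
# config.json example: {"Account": "SELECT Id, Name FROM Account",
#                       "Contact": "SELECT Id, Email FROM Contact"}
import csv
import json
import os
from simple_salesforce import Salesforce

with open("config.json") as fh:
    queries = json.load(fh)

sf = Salesforce(
    username=os.environ["SALESFORCE_USERNAME"],
    password=os.environ["SALESFORCE_PASSWORD"],
    security_token=os.environ["SALESFORCE_SECURITY_TOKEN"],
)

for sobject, soql in queries.items():
    records = getattr(sf.bulk, sobject).query(soql)  # list of record dicts
    if not records:
        continue
    fieldnames = [k for k in records[0] if k != "attributes"]
    with open(sobject + ".csv", "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=fieldnames, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(records)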
Adding a Python 3.6 solution. It should work (I haven't tested it, though). Make sure the packages (requests, beautifulsoup4 and simple_salesforce) are installed.
import os
import requests
from datetime import datetime
from bs4 import BeautifulSoup as BS
from simple_salesforce import Salesforce

def login_to_salesforce():
    sf = Salesforce(
        username=os.environ.get('SALESFORCE_USERNAME'),
        password=os.environ.get('SALESFORCE_PASSWORD'),
        security_token=os.environ.get('SALESFORCE_SECURITY_TOKEN')
    )
    return sf

org_id = "SALESFORCE_ORG_ID"  # can be found in Salesforce -> Company Profile
export_page_url = "https://XXXX.my.salesforce.com/ui/setup/export/DataExportPage/d?setupid=DataManagementExport"

sf = login_to_salesforce()
cookie = {'oid': org_id, 'sid': sf.session_id}

export_page = requests.get(export_page_url, cookies=cookie)
export_page = export_page.content.decode()

# collect all export download links from the Data Export page
links = []
parsed_page = BS(export_page, "html.parser")
_path_to_exports = "/servlet/servlet.OrgExport?fileName="
for link in parsed_page.findAll('a'):
    href = link.get('href')
    if href is not None and href.startswith(_path_to_exports):
        links.append(href)

print(links)
if len(links) == 0:
    print("No export files found")
    exit(0)

today = datetime.today().strftime("%Y_%m_%d")
download_location = os.path.join(".", "tmp", today)
os.makedirs(download_location, exist_ok=True)

baseurl = "https://XXXX.my.salesforce.com"  # same instance as export_page_url
for link in links:
    file_url = baseurl + link
    downloadfile = requests.get(file_url, cookies=cookie, stream=True)  # stream so large files aren't held in RAM
    target = os.path.join(download_location, downloadfile.headers['Content-Disposition'].split("filename=")[1])
    with open(target, 'wb') as f:
        for chunk in downloadfile.iter_content(chunk_size=100 * 1024 * 1024):  # 100 MB chunks
            if chunk:
                f.write(chunk)
I have added a feature to my app that automatically backs up the weekly/monthly CSV files to an S3 bucket: https://app.salesforce-compare.com/
Create a connection provider (currently only AWS S3 is supported) and link it to a Salesforce connection (which needs to be created as well).
On the main page you can monitor the progress of the scheduled job and access the files in the bucket.
More info: https://salesforce-compare.com/release-notes/