Salesforce unmanaged package migration

My new client has an unmanaged package in their production org. I have access to the package / source code, but I can't modify this instance of the code. I need to move a copy of the package into my development org. I'm attempting to use the Force.com IDE to deploy the package, but I'm getting a series of errors such as the ones below.
I'm new to Salesforce development. Any help would be appreciated.
* Deployment Log *
Result: FAILED
Date: August 17, 2014 9:52:04 AM EDT
Deployed From:
Project name: xxx
Username: xxx
Endpoint: www.salesforce.com
Deployed To:
Username: xxx
Endpoint: www.salesforce.com
Deploy Results:
File Name: classes/CreateNewVoucherController.cls
Full Name: CreateNewVoucherController
Action: NO ACTION
Result: FAILED
Problem: sObject type 'Kumo_Vouchers_Group__c' is not supported. If you are attempting to use a custom object, be sure to append the '__c' after the entity name. Please reference your WSDL or the describe call for the appropriate names.
File Name: classes/GatewayAuthorizeNet.cls
Full Name: GatewayAuthorizeNet
Action: NO ACTION
Result: FAILED
Problem: Invalid type: GatewayBase
File Name: classes/GatewayBase.cls
Full Name: GatewayBase
Action: NO ACTION
Result: FAILED
Problem: sObject type 'Order__c' is not supported. If you are attempting to use a custom object, be sure to append the '__c' after the entity name. Please reference your WSDL or the describe call for the appropriate names.

Based on the error description, I assume you tried to deploy the code but did not deploy the sObjects from the package first.
Try to deploy in a few steps:
1. Deploy all sObjects. An object may depend on other objects, so start with the objects that have no dependencies. Objects can also depend on VF pages, which can't be deployed without their Apex, which in turn can't be deployed without the objects; in that case, comment out the parts of the object metadata that reference the VF pages and restore them after the VF pages are deployed (a sample manifest for this first step is sketched after this list).
2. Deploy all layouts, object translations, custom labels, applications, static resources and workflow rules.
3. Deploy the Apex classes.
4. Deploy the Apex triggers and VF pages.
5. Deploy all other components.
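For that first pass, a minimal package.xml covering only the custom objects could look roughly like the sketch below; the object names are just the ones taken from your error messages, and the API version should match your org:
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>Kumo_Vouchers_Group__c</members>
        <members>Order__c</members>
        <name>CustomObject</name>
    </types>
    <version>31.0</version>
</Package>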

Related

How to share typescript types between nestjs backend and react frontend?

Let's say I have a /createPost POST endpoint in my NestJS backend. The request and response objects should be fully typed in the backend via the DTO. However, how do I bring these types into the frontend? I would like to have a typed post request object, so I cannot send invalid or missing fields in the post body. I would also like to have a TypeScript interface for the response. I want to REUSE the code from the backend. What's the best way to go about this? Is there some sort of type generator library?
Maybe you can create a monorepo using an Nx workspace (https://nx.dev/).
With that you can share models between all apps.
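As a rough sketch (the library path libs/shared/dtos and the DTO shape are just assumptions for illustration), a shared Nx library can hold the DTO types, and both the NestJS backend and the React frontend import them:
// libs/shared/dtos/src/index.ts -- shared between backend and frontend
export interface CreatePostDto {
  title: string;
  body: string;
}
export interface CreatePostResponse {
  id: string;
  createdAt: string;
}
// In the React app (and likewise in the NestJS controller), import these from the shared lib:
// import { CreatePostDto, CreatePostResponse } from '@my-org/shared/dtos';
async function createPost(dto: CreatePostDto): Promise<CreatePostResponse> {
  const res = await fetch('/createPost', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(dto),
  });
  return (await res.json()) as CreatePostResponse;
}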
You can use GraphQL; it can share types with the frontend (for example via code generation from the schema).
Another thing you could try is to create a GitHub package that you install (it can be private, and that is free, as opposed to doing this on npm).
Things to remember after you have published the package in your GitHub organization:
Add a .npmrc file in the root folder of your project and write the following:
@your-organization-name:registry=https://npm.pkg.github.com
registry=https://registry.npmjs.org
Remember that you will need to run npm login for this to work, like this:
npm login --registry=https://npm.pkg.github.com
Username: your GitHub username
Password: a personal access token you create under GitHub -> Settings -> Developer settings.
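Once the package is published and the .npmrc above is in place, consuming it from either app is an ordinary install and import; the package name here is only a placeholder:
npm install @your-organization-name/shared-types
// then, anywhere in the frontend or backend:
import { CreatePostDto } from '@your-organization-name/shared-types';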
I know I haven't fully described the whole flow of publishing a package to GitHub here, but the general idea should be clear.
Good luck!

SFDX: Retrieve Source from Org

I have a Lightning Web Component called list.
When I try to do SFDX: Retrieve Source from Org to get my Apex classes from the org, I get the following error:
20:32:09.428 Starting SFDX: Retrieve Source from Org
=== Retrieve Warnings
FULL NAME TYPE MESSAGE
───────── ──────────────────────── ──────────────────────────────────────────────────────────────────────
list LightningComponentBundle Entity of type 'LightningComponentBundle' named 'list' cannot be found
20:32:10.344 ended SFDX: Retrieve Source from Org
I need help,
Thanks!
I created a new project with the command:
SFDX: Create Project with Manifest
Then, I right-clicked on the package.xml file in the manifest folder,
and selected:
SFDX: Retrieve Source in Manifest from Org
This worked.
(not a real answer but too long for a comment)
What's the exact command you're running to fetch it? If the command you run doesn't explicitly mention a username/alias, check what happens when you open the default org (sfdx force:org:open) - by using the default org you might be connecting to the wrong org, and the component really isn't there.
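For example (myOrg is just a placeholder alias), you can list the orgs you have authorized and retrieve the bundle from an explicitly named org rather than the default one:
sfdx force:org:list
sfdx force:source:retrieve -m LightningComponentBundle:list -u myOrg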
Maybe "list" is a reserved keyword?
Can you definitely see the component in the org, i.e. does Setup -> Lightning Components contain it? Open the Developer Console, tick the "Tooling API" checkbox at the bottom of the Query Editor, and run
SELECT Id, FilePath, Format, Source
FROM LightningComponentResource
ORDER BY FilePath
Is the component in there somewhere? https://developer.salesforce.com/docs/atlas.en-us.api_tooling.meta/api_tooling/tooling_api_objects_lightningcomponentresource.htm
Do you have other tools available, like Ant + the Migration Tool, or even Workbench? You should be able to run a full metadata retrieve with a package.xml similar to:
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>*</members>
        <name>LightningComponentBundle</name>
    </types>
    <version>52.0</version>
</Package>
There are a couple of possible issues in your situation:
1- The org you authorized in VS Code may be different from the org you are retrieving from.
2- The file may have been deleted in the org after you deployed your code; that would also produce this error.

Upload file to s3 browser through batch script

I am trying to upload a JSON file to S3 through S3 Browser in a batch script, using the command:
s3browser-con.exe upload <account_name> <local directory\json file> <s3 bucket name and path>
(I referred to the CLI documentation.) However, I get the error:
:AccountManager::CurrentAccount::get::failed - unable to show the Add New Account dialog.
The batch script runs fine when I run it on its own; however, when I run it through a Command Task in Informatica Cloud, it gives me this error.
I suspect it is trying to create a new account at runtime, but we can only add two accounts since it is the free version. I'm not sure, though, as I am new to S3 and batch scripts.
Also, is there any way to avoid giving the account name, as different users might have different account names for a particular bucket? Any help and guidance would be appreciated.
EDIT:
Note: this is the detailed error:
Unhandled Exception: System.NullReferenceException: Object reference not set to an instance of an object. at mg.b(String aty) at mk.a(String[] avx) at mg.Main(String[] args)
<account_name> is held in the user profile of whoever set it up in s3browser-con. So if you are not running Informatica (the Secure Agent) on the same machine under the same user, it's not going to work.
However, why are you using a third-party tool to upload files to S3 from within Informatica? Why not just use Informatica's built-in capabilities? Unless there is a very specific reason for doing this, your solution appears to be overcomplicated.

Failed to deploy component - "Cannot deserialize the current JSON object ..."

Background Information
TFS 2015 RC2
Release Management Server 2015
Azure VM with 2015 deployment agent
Physical local machine with 2015 deployment agent
Both machines get the drop location using the 'Through Release Management Server over HTTP(S)' option. Currently we are using the HTTP side of things over port 1000.
Workflow
Stop App Pool (Working)
Stop Website (Working)
Copy website directory to backup location (Working)
Backup Database (Working)
Deploy Component (Not Working), using either
xcopy
msdeploy (web deploy package)
The Error (TL;DR)
The same error is received every time; it doesn't matter which machine or which deployment method. The component always fails with a Json.NET issue.
7/22/2015 3:03:39 PM - Error - (13704, 104) - Cannot deserialize the current JSON object (e.g. {"name":"value"}) into type 'System.String[]' because the type requires a JSON array (e.g. [1,2,3]) to deserialize correctly.
To fix this error either change the JSON to a JSON array (e.g. [1,2,3]) or change the deserialized type so that it is a normal .NET type (e.g. not a primitive type like integer, not a collection type like an array or List<T>) that can be deserialized from a JSON object. JsonObjectAttribute can also be added to the type to force it to deserialize from a JSON object.
Path 'ErrorMessage', line 1, position 16.: \r\n\r\n at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateObject(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.CreateValueInternal(JsonReader reader, Type objectType, JsonContract contract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerMember, Object existingValue)
at Newtonsoft.Json.Serialization.JsonSerializerInternalReader.Deserialize(JsonReader reader, Type objectType, Boolean checkAdditionalContent)
at Newtonsoft.Json.JsonSerializer.DeserializeInternal(JsonReader reader, Type objectType)
at Newtonsoft.Json.JsonConvert.DeserializeObject(String value, Type type, JsonSerializerSettings settings)
at Newtonsoft.Json.JsonConvert.DeserializeObject[T](String value, JsonSerializerSettings settings)
at Microsoft.TeamFoundation.Release.Data.Proxy.RestProxy.BaseDeploymentControllerServiceProxy.GetPackageFileInfos(String packageLocation)
at Microsoft.TeamFoundation.Release.DeploymentAgent.Services.Deployer.HttpPackageDownloader.CopyPackageAndUnpackIt(String packageSourceLocation, String filesDestinationLocation)
at Microsoft.TeamFoundation.Release.DeploymentAgent.Services.Deployer.ComponentProcessor.CopyComponentFiles()
at Microsoft.TeamFoundation.Release.DeploymentAgent.Services.Deployer.ComponentProcessor.DeployComponent()
Update (Workaround)
As a workaround, if I edit the build configuration to use a UNC path as the drop location, the deployment is successful. However, I want to use the 'Copy build output to server' option.
Uninstalling the deployer and installing the RM 2015 RTM deployer should fix this issue.
There was an issue in previous RM releases where the Newtonsoft.Json DLL was not upgraded during the deployer's auto-upgrade.
I don't think MS really tested the agent releases with Update 1.
I got the same error, which is actually just a generic error message when deploying through HTTP. When I switched to deployment through UNC paths, I found out what the problem was.
As you might know, with TFS 2015 you had to name the release components exactly after the artifact names. So artifact 'WebApp X' has a release component called 'WebApp X' in RM with the subpath 'WebApp X'.
In my release configuration I have 3 different components (and artifacts).
So on the disk it was:
'\build\WebApp X'
'\build\WebApp Y'
'\build\WebApp Z'
Worked perfectly with 2015 RTM.
Now after Update 1 it looks for the following:
'\build\WebApp X\WebApp X'
'\build\WebApp X\WebApp Y\'
'\build\WebApp X\WebApp Z'
I don't know why it does this or how to solve it properly yet, but I manually altered the folders in the artifact drop location and RM picked them up fine. So I'm still looking for a way to fix this so that it works correctly.
This JSON issue occurs if the user the RM server app pool runs under doesn't have access to the drop location of the component and you have selected the 'Through RM server over HTTP(S)' option.
So as a fix, you can give the app pool user permission to access the drop (for instance along the lines of the sketch below).
You can see the actual error in the server logs:
"Package location '\share\' does not exists or Application Pool user does not have access"

Alfresco webscript can't find Company Home folder

I'm trying to create a custom webscript on Alfresco Community. I'm following this tutorial: http://docs.alfresco.com/community/concepts/ws-folderListing-intro.html.
Everything works out fine in the beginning. When I navigate to
localhost:8080/alfresco/service/
and click on 'Refresh webscripts', my new webscript is registered.
But now when I navigate to
localhost:8080/alfresco/service/dir/Company%20Home
I get the following message:
The Web Script /alfresco/service/dir/Company Home has responded with a status of 404 - Not Found.
404 Description: Requested resource is not available.
Message: Folder Company Home not found.
Server: Community v4.2.0 (r63893-b12) schema 6.033
Time: 21-mei-2014 17:04:59
Diagnostics: Inspect Web Script (org/example/dir.get)
What is going wrong? I followed all the steps in the tutorial precisely.
Check whether your OS is set to a language other than English, because Alfresco translates the names of all the default folders according to the OS language. In my case I am using French, so there is no Company Home; instead there is Espace Racine (so the URL would be localhost:8080/alfresco/service/dir/Espace%20Racine).
As for the folder-listing API, there is no real need to write a web script for that, because one already exists by default in Alfresco under the path:
http://localhost:8080/alfresco/service/sample/folder/Company%20Home
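As a quick smoke test from the command line (admin:admin are the default development credentials, so adjust them, and on a French-language install substitute Espace%20Racine for Company%20Home):
curl -u admin:admin "http://localhost:8080/alfresco/service/sample/folder/Company%20Home"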
Maybe you hit the bug: Need to know why 'companyhome' scope object not available in Workflow Script API
