Custom object records created on Salesforce not showing up under Leads

I have a custom object 'Subject__c' with 3 fields, and I created the records by uploading a CSV file. Subject__c has a lookup relationship with Lead (it's general for the same user regardless of which lead he is viewing). I am able to insert a related list, and I can see under Data Management / Storage Usage that the records were created. But the related list shows up blank.

You're saying that the custom object has a lookup to Lead, but then you say Subjects are generic and somehow should be displayed on every Lead page? I don't think that will work.
Records appear on the related list only when the field Subject__c.Lead__c is populated with "this" Lead's Id (please note I've made a best guess at the field name). So you'd need to insert separate data for each Lead, which can quickly blow your storage usage and will be a pain in the a$$ to maintain later. Is it only for displaying? Or do you plan to later capture some kind of survey results for each Lead?
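For illustration, here's a minimal anonymous Apex sketch of what each row would have to look like before it shows up on a Lead's related list (again, object and field names are my best guesses):

Lead l = [SELECT Id FROM Lead LIMIT 1];
Subject__c s = new Subject__c(
    Name = 'Maths',   // whatever your 3 fields are
    Lead__c = l.Id    // unless this lookup points at the viewed Lead, the related list stays empty
);
insert s;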
If it's just for display, I think you'll need to embed a Visualforce page in the Lead page layout to achieve that in a saner way. Are the subjects specific to the current viewing user? Or is it more like a general list, just 3 subjects for the whole organisation?
P.S. An "object" is like a table in a normal database. I think you've mixed up the difference between a table and the records / rows of data stored in it.

Related

Lookup filter Question -- Tricky One (Lightning)

It's a tricky situation and I'm failing to figure it out. I am a newbie in SF. Here is the scenario:
There is a lookup filter on the Product Interest field, and
Product Brand is a picklist.
What's happening is that out of 6 picklist values for Product Brand, only one is causing the lookup functionality to fail, i.e. products corresponding to that particular brand do not show up in the Product Interest field.
- "Show All" does nothing,
- Typing related values (from Workbench) does nothing,
- Adding a new product does not register the new product with that particular brand,
- I tried comparing Workbench data with other brands; still no conclusion.
Any clues y'all might be able to provide would be appreciated. Thank you so much in advance.
Regards,
Sunny
Some screenshots would help to understand your issue...
Data visibility
Are you a sysadmin? If not, maybe there are matching rows but sharing rules prevent you from seeing them. Does the lookup show the records OK when you change the filter from mandatory to optional?
Data quality
Are you sure there are no typos, and that the brand picklist value on the record you're creating / modifying is identical to the brand picklist (or whatever it is) on the product? Don't look at the label (what's shown in the UI); look at the picklist value (what's actually stored in the database). Labels can be translated to other languages, but filters and custom code compare values.
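If you can run anonymous Apex in the Developer Console, here's a quick sketch to dump what's actually stored (object and field names are my guesses, adjust to yours):

for (AggregateResult ar : [SELECT ProductBrand__c brand, COUNT(Id) total
                           FROM Product2
                           GROUP BY ProductBrand__c]) {
    System.debug(ar); // look for a near-duplicate of the problematic value
}

Two values that look identical in the UI but differ in the debug output (trailing space, different case, translated label) would explain the filter mismatch.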
Other debugging ideas
Is it a standard create/edit screen or a custom component? If needed, would you know how to run the standard create screen and/or go to the Classic UI?
Can you import a record bypassing the UI? Ask your admin how to use the Data Import Wizard or Data Loader, or maybe a programmer could help you with a piece of code to run in the Developer Console. Something like:
MyObject__c o = new MyObject__c(
    ProductBrand__c = 'Problematic',
    ProductInterest__c = '01t....' // the Id of the product you hope to see in the lookup, the one with the same 'Problematic' picklist value
);
insert o;
System.debug(o.Id);
Maybe it'll work; maybe it'll throw an error that helps the investigation.
Do you have record types on the object? Maybe the problematic picklist value is not legal for the record type you're using, and you'd need to allow it first.
Can you reverse the create action? For example, go to the Product layout editor, add a related list of "MyObjects" (or whatever it is you're inserting) and try to create one using the "New" button on the related list. Hopefully that will come up with the product preselected in the lookup, or show you some error.

What is the best way to manage multiple instances of the same dataset class across multiple instances of the same form?

Apologies for the long-winded question. I am experienced with the basics, but this is my first time working with datasets and databases (previous applications involved records, arrays and text files). My question is one of best practice and the best way to implement this, but to answer it you will need to know what I am trying to achieve...
I am building an application that allows a telephone operator to take messages, write them into a form and save them to a DB. The main form contains a grid showing the messages already in the DB, and I have a 'message form', which is a VCL form containing edit, combo and checkbox controls for all the respective fields the operator must log; some are mandatory, some are optional. There are lots of little helpers and automations that run depending on the user input, but my question is related to the underlying data capture and storage.
There are two actions the user can perform:
Create new/blank message by clicking a button
Edit message by clicking on respective line in grid
Both of these actions cause an instance of the form to be created and initialized, and in the case of EDIT, the fields are then populated with the respective data from the database. In the case of NEW a blank dataset/record is loaded - read on...
The message form captures four distinct groups of information, so for each one I have defined a specific record structure in a separate unit (let's call them Group1, Group2, Group3 and Group4), each containing different numbers/types of elements. I have then defined a fifth record structure with four elements, each element being one of the previous four record structures; this is called TMessageDetails.
Unlike perhaps other applications, I am allowing the user to have up to 6 instances of the message form open at any one time - and each one can be in either NEW or EDIT mode. The only restriction is that two forms in EDIT mode cannot be editing the same message - the main form prevents this.
To manage these forms, I have another record (TFormDetails) with elements such as FormName (each form is given a unique name when created), an instance of TMessageDetails, FormTag and some other bits. I then have an array of TFormDetails with length 6. Each time a form is opened a spare 'slot' in this array is found, a new form created, the TMessageDetails record initialized (or data loaded into it from the DB) and a pointer to this record is given to the form. The form is then opened and it loads all the data from the TMessageDetails record into the respective controls. The pointer is there so that when the controls on the form make changes to the record elements, the original record is edited and I don't end up with a 'local' copy behind the form and out of sync with the original.
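Roughly, the structures look like this (simplified; the real field lists are longer):

type
  TMessageDetails = record
    Group1: TGroup1;   // the four per-group records mentioned above
    Group2: TGroup2;
    Group3: TGroup3;
    Group4: TGroup4;
  end;
  PMessageDetails = ^TMessageDetails;

  TFormDetails = record
    FormName: string;  // unique name given to each form on creation
    FormTag: Integer;
    Details: TMessageDetails;
  end;

var
  FormSlots: array[0..5] of TFormDetails; // one slot per open message form

Each form is handed a PMessageDetails pointing at @FormSlots[i].Details, so edits made via the form's controls land in the original record.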
For the DB interaction I have four FDQuery components (one per group) on the message form, each pointing to a corresponding table (one per group) in an SQLite DB.
When loading a message I have a procedure that uses FDQuery1 to get a row of data from Table1, and then it copies the data to the Group1 record (field by field) in the respective TMessageDetails record (stored in the TFormDetails array) for that form. The same then happens with FDQuery2, 3, 4 etc...
Saving is basically the same but obviously in reverse.
The reason for the four FDQuery components is so that I can keep each dataset open after loading, which then gives me an open dataset to update and post to the DB when saving. The reason for copying to the records is mainly so that I can reference the respective fields elsewhere in the code with shorter names, and also because when the VCL control tries to change a field in the dataset the changes don't 'stick' (the data I try to save back to the DB is the same as what I loaded), whereas in a record they do. The reason for breaking the records down into groups is that there are places where the data in one of the groups may need to be copied somewhere else, but not the whole message. It was also more natural to me to use records than datasets.
So my question is...
Is my use of the record structures, a global TFormDetails array, pointers, and four FDQuery components per form (so 6 open forms means up to 24 open datasets), with copying between records and datasets on save/load, a good way to implement what I am trying to achieve?
OR
Is there a way I can replace the records with datasets (making copying from FDQuery easier/shorter, surely?) but still store them in a global 'message form' array so I can keep track of them? Should I also try to reduce the number of FDQuery instances and potential open datasets by having, say, one FDQuery component and re-using it to load the tables into other global datasets, etc.?
My current implementation works just fine and there is no noticeable lag/hang when saving/loading. I just can't find much info on what is considered best practice for my needs (namely having multiple instances of the same form open; other examples refer to ShowModal and only having one dataset to worry about), so I'm not sure if I'm leaving myself open to problems like memory leaks (I understand the 'dangers' of using pointers), performance issues or just general bad practice.
Currently using RAD 10.3 and the latest version of SQLite.
I don't know if what I'll say is the "best" practice, but it is how I would do it.
The form would have its own TFDQuery using a global FDConnection. When several instances of the form are created, you have an FDQuery for each instance.
You add an 'Execute' method to the form. That method creates the FDQuery, gets all the data from the database with one or more queries, populates the form and shows the form. The Execute method receives an argument (or a form property), such as the primary key, to be able to get the data. If the argument is empty, then it is for a new record.
If the form has to update the grid on the fly, then an event will be used. The main form (containing the grid) installs an event handler and updates the grid according to the data given by the form.
When the form is done, it uses the primary key to store data back to the database.
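A rough sketch of the idea (just a sketch: dmMain, the Messages table and LoadControlsFromDataset are placeholders for your own names, and FQuery is a private TFDQuery field on the form):

procedure TMessageForm.Execute(const APrimaryKey: string);
begin
  FQuery := TFDQuery.Create(Self);            // owned by the form, freed with it
  FQuery.Connection := dmMain.FDConnection1;  // the shared, global connection
  if APrimaryKey <> '' then
  begin
    // EDIT mode: fetch the existing row
    FQuery.SQL.Text := 'SELECT * FROM Messages WHERE Id = :Id';
    FQuery.ParamByName('Id').AsString := APrimaryKey;
    FQuery.Open;
    LoadControlsFromDataset(FQuery);          // your own field-to-control copying
  end;
  // NEW mode: APrimaryKey is empty, so the controls stay blank
  Show;
end;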

Referencing previously defined items in JSON-LD

I'm trying to wrap my head around defining JSON-LD correctly for my website. The bit I'm not sure about is how to reference previously defined JSON-LD items without having to copy and paste them.
I know that each item can be given an @id property, but how should I correctly utilize it (if I even can)?
For example, suppose I create an Organization item with an @id of https://example.com/#Organization.
When I need to reference that item again, is it correct to simply specify that @id again, nothing more?
Also, am I correct in assuming that I can do this even if the item isn't defined on the page from which I'm referencing it?
In the case of the Organization item type, my understanding is that you should only declare it on the home page rather than on every page. So if the user is currently on a product page and I want to reference the organization, it isn't already defined on the page I'm on, but it has been declared elsewhere.
You're correct that using the same @id in different places allows you to make statements about the same thing. In fact, the JSON-LD Flattening algorithm, which is used as part of Framing, consolidates these all together in a single node object.
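For example, the home page might define the organization in full, and a product page can then point at it by @id alone (URLs and names illustrative):

On the home page:

{
  "@context": "https://schema.org",
  "@type": "Organization",
  "@id": "https://example.com/#Organization",
  "name": "Example Corp",
  "url": "https://example.com/"
}

On a product page:

{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "manufacturer": { "@id": "https://example.com/#Organization" }
}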
JSON-LD is a format for Linked Data, and it is reasonable to say that statements made about the same resource on different locations (pages) can be merged together, and if you form a Knowledge Graph from information across multiple locations, this is effectively what you're doing. A Knowledge Graph will typically reduce the JSON-LD (or other equivalent syntactic representation) to RDF Triples/Quads, where each "page" effectively defines a graph, which can be combined to create a larger Dataset. You can then query the dataset in different ways to retrieve that information, which can result in the separate statements being consolidated.
Most applications, however, will likely look for a complete definition of a resource in a single location. But for something like Organization, you could imagine that different Employee resources might be made, with a relation such as :Employee :worksFor :Organization, so that the page for an Organization would not be expected to also list every employee in that organization, but a more comprehensive Knowledge Graph made from the merge of all of those separate resources could be used to reconstruct it.
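Sketching that employee case in schema.org terms (URLs again illustrative):

{
  "@context": "https://schema.org",
  "@type": "Person",
  "@id": "https://example.com/people/alice#me",
  "name": "Alice",
  "worksFor": { "@id": "https://example.com/#Organization" }
}

A consumer that merges this page's graph with the home page's graph can then answer "who works for Example Corp?" even though neither page states it on its own.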

Bi-Directional Relationships in Backendless - Working with highly interrelated data

This is a fundamental novice level question that will not be short. This is specific to Backendless.
I have a number of scenarios I would like to be able to address, as I am working with a small set of tables that are all interrelated in some form and need to be explored from various directions.
A basic example would be something like PersonTable and AddressTable. PersonTable containing a list of people, with their lastName, firstName, etc. AddressTable containing addresses and their various attributes of streetName, houseNumber, etc.
Let's say I want to provide users two distinct views in a main navigation and allow them to drill down further.
View1: You click "People", you get a list of people from the PersonTable. This list appears in a secondary navigation window. Clicking an individual person will provide you the address/addresses associated with that person.
However, I also want to be able to do this in reverse:
View2: You click "Address", you get a list of addresses from the AddressTable. This list appears in a secondary navigation window. Clicking an individual address will provide you with a person/people associated with that address.
So from a uni-directional approach, there would be a relationship from PersonTable to AddressTable. This is perfectly fine for View 1: one query will provide the data for the secondary navigation, and the results from that query can include the relationship data needed for the drill-down.
However, if I wanted to support View 2, I would have to perform two queries given the direction of the relationship and where I am starting.
If you scale this to a larger set of data with more tables and fields, my concern may become more apparent: I want to provide some data from the parent of the relationship in the initial secondary-navigation item creation. That means an initial query of that table to list the items, plus a query for each individual item (to obtain the data I need from its parent in the relationship) to complete the data shown in the initial list. (Then clicking an item would provide even more detail.) Obviously this relationship can be reversed, and I would then be pulling child data and not parent data, but then when I want to come at the data from the other direction (the other View) I am in the same situation again.
TL;DR: I need to be able to traverse tables in pretty much any direction and drill into data while attempting to minimize the number of queries required to do so for any given case. Is this a case where a large number of relationships is warranted?
Getting to the root of the question: My understanding is that, while Backendless does support them, bi-directional relationships are generally frowned upon (at least in the SQL world).
So, really, what is best practice? Is it simply a logical "Create relationships when they help you reduce queries"?
Bidirectional is frowned upon here too, though it does work. You may find a few bugs as it isn't used much.
The reason is that it isn't required: you already know you can make a request to get the inverse content.
But the reason you should not use them is that auto-loading all of that extra data when you might not use it is more costly than making explicit requests when you do...
Also, you can limit your query impact in terms of network traffic by creating a custom service which does all the leg work.
However, if I wanted to support View 2, I would have to perform two queries given the direction of the relationship and where I am starting.
Performing two queries is not necessary in Backendless, as the query syntax supports "backward lookup". It means that, knowing a "child" object, you can look up its parent using the following "whereClause" syntax:
childRelation.objectId = 'childObjectId'
For example, for your Person and Address tables, suppose the relation column in the parent (Person) table is called "addresses" and it is a one-to-many relation. Then the query sent to the Person table is:
addresses.objectId = 'specific-objectId-value-from-Address'
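In the JavaScript SDK that lookup could look roughly like this (a sketch; it assumes Backendless.initApp has already been called and uses the table/column names from the example above):

// find the Person rows whose 'addresses' relation contains this Address
const addressId = 'specific-objectId-value-from-Address';
const query = Backendless.DataQueryBuilder.create()
    .setWhereClause(`addresses.objectId = '${addressId}'`);
const people = await Backendless.Data.of('Person').find(query);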
Keep in mind that you can test your whereClause queries using the Backendless console. Here's an article about that feature:
https://backendless.com/feature-14-sql-based-search-for-data-objects-using-console/
Hope this helps.

Create new table for every Content $type in CakePHP

Description of Goal
Trying to make a CMS (for internal use, but on many sites) with CakePHP.
I'd like to be able to have a generic Content model, then have many different user-generated content types.
This would be simple to do, but my single contents table would eventually become massive if every piece of content on the site was in it. I'd like to split it into tables to help query times/site speed...etc.
My thought (not sure if it's possible) would be to somehow tell CakePHP that if the type field of the Content is "article", it should use the content_articles table, etc. (see the sketch after the table list below). These tables would be generated afterSave (I suppose) when creating a new content type.
It would be nice to give users options for which fields the specific content type would use (even manage this by adding/removing fields, etc.), then only generate those fields in the table, and somehow do validation on them based on the content_fields table data.
//my thoughts on tables:
content_types //id, name, description, use_table
content_fields //id, name, content_type_id, required, field_type, max_chars
content_articles //generated by code
content_people //generated by code
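In other words, something like this, if it's even possible (just a sketch of my intent; Model::setSource() and Inflector exist in CakePHP, but the 'type' handling here is my guess):

// repoint the generic Content model at the per-type table before reading
$type = $this->Content->field('type', array('Content.id' => $id)); // e.g. 'article'
$this->Content->setSource('content_' . Inflector::pluralize($type)); // content_articles
$rows = $this->Content->find('all');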
Questions:
Is it even possible? Are there better ways to go about this?
Perhaps use a key-value table for content rather than a standard table? The Utils plugin from CakeDC can do just that with a supported RDBMS.
Or, you could set this model to use a schema-less data source like MongoDB, which is a great use case for NoSQL. I'd probably take that approach if you are talking about massive key-value stores and a changing schema. There's a plugin for MongoDB on GitHub.
