Actually, I want to collect information from different data sources (CSV files, databases, ...); if we don't find the information in one source, we will search in another one.
I tried to find a design pattern to solve this problem. Is there some well-known design pattern to make use of?
I think you can use the Adapter pattern.
Target - defines the domain-specific interface that Client uses.
Client - collaborates with objects conforming to the Target interface.
Adaptee - defines an existing interface that needs adapting.
Adapter - adapts the interface of Adaptee to the Target interface.
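Here is a minimal Python sketch of that structure applied to your scenario (the class names, the "key" column, and the "records" table are illustrative assumptions): each source is wrapped in an adapter that conforms to a common Target interface, and the client tries each source in turn until one returns a result.

from abc import ABC, abstractmethod
from typing import Optional
import csv
import sqlite3


class RecordSource(ABC):
    """Target: the domain-specific interface the client uses."""

    @abstractmethod
    def find(self, key: str) -> Optional[dict]:
        ...


class CsvAdapter(RecordSource):
    """Adapter: wraps the csv module (the adaptee) behind the Target interface."""

    def __init__(self, path: str):
        self._path = path  # assumes a CSV file with a "key" column

    def find(self, key: str) -> Optional[dict]:
        with open(self._path, newline="") as f:
            for row in csv.DictReader(f):
                if row.get("key") == key:
                    return row
        return None


class SqliteAdapter(RecordSource):
    """Adapter: wraps a sqlite3 connection (the adaptee) behind the Target interface."""

    def __init__(self, conn: sqlite3.Connection):
        self._conn = conn

    def find(self, key: str) -> Optional[dict]:
        cur = self._conn.execute("SELECT key, value FROM records WHERE key = ?", (key,))
        row = cur.fetchone()
        return {"key": row[0], "value": row[1]} if row else None


def lookup(sources: list, key: str) -> Optional[dict]:
    """Client: asks each source in order and stops at the first hit."""
    for source in sources:
        result = source.find(key)
        if result is not None:
            return result
    return None

The fallback loop in lookup is really the Chain of Responsibility idea layered on top of the adapters; if you prefer, you can wire the sources into an explicit chain instead of iterating over a list.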
Since I don't get any help from reading documentation and blog posts, I'll ask over here:
I want to deploy a SageMaker endpoint after fitting a SageMaker Pipeline. I want to have an endpoint which is backed by a PipelineModel. This PipelineModel should consist of two models: a fitted model which encodes my data and a model which predicts with an XGBoost estimator. I am following along with this documentation: enter link description here
But this example doesn't show how to integrate the fitted preprocessor model into a pipeline step. What step do I have to use? A TrainingStep? Thanks in advance. I am desperate.
Check out this official example: Train, register, and deploy a pipeline model.
There are two variations to keep in mind:
For models that need training (usually those based on TensorFlow/PyTorch), a TrainingStep must be used so that the output (the model artifact) is correctly (and automatically) generated and can be used later for inference.
For models generated by simply fitting on the data (e.g., a scaler with sklearn), you could create a TrainingStep in disguise (an extra component in the pipeline; it is not very correct, but it is a workaround). The more correct method is to configure the preprocessing script so that it internally saves a model.tar.gz file with the necessary files (e.g., pickle or joblib objects) inside; that archive can then be used in later steps as model_data. In fact, if you have a model.tar.gz, you can define a Model of various types (e.g., an SKLearnModel) that is already fitted. A sketch of such a preprocessing script follows.
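A rough sketch of such a preprocessing script (the /opt/ml/processing paths follow the usual SageMaker Processing conventions, but the script name, file names, and the fact that every column gets scaled are assumptions):

# preprocess.py (hypothetical script run by an SKLearnProcessor / ProcessingStep)
import os
import tarfile

import joblib
import pandas as pd
from sklearn.preprocessing import StandardScaler

input_dir = "/opt/ml/processing/input"    # ProcessingInput destination
output_dir = "/opt/ml/processing/train"   # ProcessingOutput source for transformed data
model_dir = "/opt/ml/processing/model"    # ProcessingOutput source for the fitted encoder
os.makedirs(output_dir, exist_ok=True)
os.makedirs(model_dir, exist_ok=True)

df = pd.read_csv(os.path.join(input_dir, "data.csv"))  # assumed file name
scaler = StandardScaler().fit(df)

pd.DataFrame(scaler.transform(df), columns=df.columns).to_csv(
    os.path.join(output_dir, "train.csv"), index=False
)

# Save the fitted object and wrap it in model.tar.gz so a later step
# can reference it as model_data for an SKLearnModel.
joblib.dump(scaler, os.path.join(model_dir, "model.joblib"))
with tarfile.open(os.path.join(model_dir, "model.tar.gz"), "w:gz") as tar:
    tar.add(os.path.join(model_dir, "model.joblib"), arcname="model.joblib")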
At this point, you define your PipelineModel with the trained/fitted models and can either proceed to direct endpoint deployment or go through the model registry for a more robust approach.
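For example, a minimal sketch of that last part (the role ARN, S3 paths, entry points, framework versions, and endpoint name below are placeholders, not values from the question): both models are built from their respective model.tar.gz artifacts and combined into a PipelineModel that backs a single endpoint.

from sagemaker.pipeline import PipelineModel
from sagemaker.sklearn.model import SKLearnModel
from sagemaker.xgboost.model import XGBoostModel

role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder

# Fitted preprocessor, produced by the preprocessing script above.
preprocessor = SKLearnModel(
    model_data="s3://my-bucket/preprocess/model.tar.gz",  # placeholder S3 path
    role=role,
    entry_point="inference_preprocess.py",  # hypothetical inference handler
    framework_version="1.2-1",
)

# Trained predictor, taken from the TrainingStep's model artifacts.
predictor = XGBoostModel(
    model_data="s3://my-bucket/train/model.tar.gz",  # placeholder S3 path
    role=role,
    entry_point="inference_xgb.py",  # hypothetical inference handler
    framework_version="1.7-1",
)

# The containers run in order: the preprocessor's output feeds the XGBoost model.
pipeline_model = PipelineModel(models=[preprocessor, predictor], role=role)
pipeline_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="my-pipeline-endpoint",  # placeholder
)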
I'm starting a new project. The aim of the project is to create an e-authoring tool for building SCORM-compliant courses. I'm new to this domain and have little idea about it. I have taken a look at the authoring tool from Articulate, which my customer requires me to replicate. I understand the content creation part, but I am trying to understand how I can export the result as a SCORM-compliant course. Along the way I learned about xAPI as well and understood that it is a kind of enhanced SCORM.
Could anyone guide me to understand this?
1) How can I create content in my custom authoring tool and export it as a SCORM-compliant package?
2) Is it better to use xAPI or SCORM?
3) How does the SCORM package communicate with my custom-made LMS?
4) I have heard about LRS.
My custom authoring tool will be built in React and the data store would be MongoDB.
Any help would be greatly appreciated. Thank you!
That is a lot to take on, particularly all at once.
1) The SCORM spec is made up of multiple parts. There is a packaging portion and a runtime portion. The basics are that your package needs to be a zip file, and that zip needs to include specific files that indicate to the LMS what type of standard it is, along with other metadata about the package. For SCORM this will be an imsmanifest.xml file. For xAPI you are most likely going to use a cmi5.xml (see cmi5) or a tincan.xml file (what Articulate Storyline exports when it says "xAPI"). The other parts of the package will depend on which standard and which version of that standard you are targeting (for SCORM: 1.2 or 2004 2nd, 3rd, or 4th edition), realizing that different LMSs support different standards and different degrees of those standards.
Once you have a package constructed that will import, the content itself (usually an HTML file) will need to locate the JavaScript API provided by the SCORM player (from the LMS) and make specific calls depending on what the content needs to store or read; this is the runtime portion. The calls will again depend on the standard and version. For xAPI-based packages (either tincan.xml packages or cmi5 packages), the content communicates directly with the LRS based on the information provided on the URL at launch time (there is no built-in JavaScript API).
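To make the packaging portion concrete, here is a rough sketch in Python of assembling a SCORM 1.2-style zip with an imsmanifest.xml at its root (the identifiers, titles, and file names are placeholders, and a real manifest needs schema locations and validation beyond this minimal skeleton):

# build_package.py: bundle an HTML launch file and a minimal imsmanifest.xml
import zipfile

MANIFEST = """<?xml version="1.0" encoding="UTF-8"?>
<manifest identifier="com.example.course" version="1.0"
          xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
          xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
  <metadata>
    <schema>ADL SCORM</schema>
    <schemaversion>1.2</schemaversion>
  </metadata>
  <organizations default="ORG-1">
    <organization identifier="ORG-1">
      <title>Example Course</title>
      <item identifier="ITEM-1" identifierref="RES-1">
        <title>Lesson 1</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <resource identifier="RES-1" type="webcontent" adlcp:scormtype="sco" href="index.html">
      <file href="index.html"/>
    </resource>
  </resources>
</manifest>
"""

with zipfile.ZipFile("course.zip", "w", zipfile.ZIP_DEFLATED) as pkg:
    pkg.writestr("imsmanifest.xml", MANIFEST)
    pkg.write("index.html")  # the content's launch page, generated by the authoring tool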
2) This entirely depends on what your customer base looks like and the types of data that you intend to capture. SCORM is a more mature landscape, has wider adoption, and is more heavily specified; if the information you need to capture fits into its limited information model, then it is still an excellent choice. If you need significant data portability and/or the information you need to capture goes beyond compliance data (pass/fail, complete, and score) and/or interaction data (questions + answers), then you should consider xAPI, specifically via cmi5.
3) The LMS must provide a JavaScript API (specified by the SCORM runtime) which the content will use as its interface. The storage/retrieval of data is implementation-specific to the LMS beyond what is included in the specification for the JavaScript API.
4) You didn't really include a question here.
I would suggest familiarizing yourself with the two sets of standards via http://scorm.com and http://xapi.com. And although it is a plug for my company's product, you may want to consider the Rustici Driver, as it is a product (library) specifically designed to make it easy for an authoring tool to export content as SCORM 1.2, 2004, AICC, cmi5, or Tin Can (the latter two being xAPI). Once you have your tool up and running with minimal standards support, you should consider testing it on Rustici's SCORM Cloud (it is free for this purpose); see http://cloud.scorm.com.
The format is huge, and there are no quick reference guides. Different authoring tools support SCORM to different depths. You should probably start with this document.
Sounds like you're talking about designing editable content, and the content "framework" itself.
This is a massive effort, and massive support! That said, people do it.
Having built a CMS system supporting many subject matters, I had to divide and conquer this task.
A few ways I'd think to digest this beast: data, data, data.
Requirements on Activities (Interaction types)
Design (static/dynamic) on these interactions
The view/facade displaying them can change; tech moves at the speed of light. You need to come up with a super solid data model.
I'd think about how these can be generic, and how they can be extended to meet the customer's goals/needs. It all depends on how much customization (if any) can happen.
I'd start mapping all this to SCORM CMI object-level calls: scoring, progress, interactions, objectives, etc.
Get yourself a wicked SCORM content API library or write one yourself. You'll be reusing a lot of these calls; there's no sense baking them into all your interactions.
Get up to speed on SCORM packaging; much of this has to be defined at author time. There is lots of reading, and a lot of features you need to pick through to see whether your customers even use them. Don't develop for features that have a 0.1% market need; the low-hanging fruit gets you to market.
Surround yourself with passionate great people. You'll need them.
As far as the standards go, it's all about portability. SCORM works directly with an LMS, if that's where your customer goes. Others use an LRS, which is coded to work with one they set at author time. You can even do both.
Aside from React and MongoDB, you'll need something that can do the lift and shift of all this content.
I need to create an ASN.1 BER-encoded file with multiple records. I've been searching for a tool (OSS, asn1c, etc.), but I can't find one that suits me, with a full example of how multiple records can be encoded in one file.
Does anyone know a good tool?
Thanks
The tools won't really help you design your file format or protocol; that is a manual task that you must perform. You will need to design the rules of how data is stored and what form each element will take.
The tools will help with implementation, allowing you to take your protocol definition and generate C or C++ code that is capable of decoding and encoding files that conform to that protocol.
The company I work for uses OSS Nokalva, which is the best, but expensive. I have also used asn1c, for personal projects, with success.
You can use asn1c and define multiple records with
MultipleRecords ::= SEQUENCE OF SingleRecord
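If generated C code is more than you need, here is an alternative illustration using the Python asn1tools library instead of asn1c (the module, type, and field names here are made up): compile the ASN.1 definition at runtime, BER-encode the list of records, and write the result to one file.

import asn1tools

SCHEMA = """
Records DEFINITIONS AUTOMATIC TAGS ::= BEGIN
    SingleRecord ::= SEQUENCE {
        id      INTEGER,
        name    UTF8String
    }
    MultipleRecords ::= SEQUENCE OF SingleRecord
END
"""

spec = asn1tools.compile_string(SCHEMA, "ber")

records = [{"id": 1, "name": "first"}, {"id": 2, "name": "second"}]
encoded = spec.encode("MultipleRecords", records)

with open("records.ber", "wb") as f:
    f.write(encoded)

# Round-trip check: decode the file back into Python objects.
with open("records.ber", "rb") as f:
    decoded = spec.decode("MultipleRecords", f.read())
print(decoded)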
I'm new to .Net and trying to learn things. I'm trying to develop a Prism4 WPF app with
Visual Studio CSharp 2010 Express Edition,
Prism v4,
Unity as IoC,
SQL Server CE as data store.
I've studied a lot(?) and been influenced by this and this among others, and decided to implement the MVVM, Repository, and Unit of Work patterns. This application will be a desktop application with a single user (me :-)
So, I've created a solution with the following projects:
Shell (Application Layout and startup logic)
Common (Application Infrastructure and Workflow Logic)
BusinessModuleA (Views and ViewModels)
BusinessModuleA.Model (Business Entities - POCO)
BusinessModuleA.Data (Repositories, Data Access (EF?) )
BusinessModuleB (Views and ViewModels)
BusinessModuleB.Model (Business Entities - POCO)
BusinessModuleB.Data (Repositories, Data Access (EF?) )
My questions are:
Which projects should reference which projects ?
If I implement Repositories in 'BusinessModuleX.Data', which is obvious, where should I define IRepositories?
Where should I define IUnitOfWork and where should I implement UnitOfWork ?
Is it OK if I consume UnitOfWork and Repositories in my ViewModels?
Instinct says it is bad design.
If (4) above is bad, then the ViewModel should get data via a Service Layer (another project?). Then, how can we track changes to the entities so as to call the relevant CRUD methods on those objects at the Service Layer?
Is any of this making any sense or am I missing the big picture ?
OK, maybe I've not made myself clear about what I wanted exactly in my first post. There are not many answers coming up. I'm still looking for answers because, while what @Rachel suggested may be effective for the immediate requirements, I want to be careful not to paint myself into a corner. I have an Access DB that I developed for my personal use at the office; it became kind of a success and is now being used by 50+ users and growing. Maintaining and modifying the Access code base was fairly simple at the beginning, but as the app evolved, it began to fall apart. That's why I have chosen to rewrite everything in .NET/WPF/Prism and want to make sure that I get the basic design right.
Please discuss.
Meanwhile, I came up with this...
First off, I would simplify your project list a bit to just Shell, Common, ModuleA, and ModuleB. Inside each project I'd have sub-folders to specify where everything is. For example, ModuleA might be separated into folders for Views, ViewModels, and Models.
I would put all interfaces and global shared objects such as IUnitOfWork in your Common project, since it will be used by all modules.
How you implement IUnitOfWork and your Repositories probably depends on what your Modules are.
If your entire application links to one database, or shares database objects, then I would probably create two more projects for the DataAccessLayer. One would contain public interfaces/classes that can be used by your modules, and the other would contain the actual implementation of the Data Access Layer, such as Entity Framework.
If each Module has its own database, or its own set of objects in the database (i.e., Customer objects don't exist unless you have the Customer Module installed), then I would implement IUnitOfWork in the modules and have them handle their own data access. I would probably still have some generic interfaces in the Common library for the modules to build from, though.
Ideally, all your modules and your Shell can access the Common library. Modules should not access each other unless one builds on another. For example, a Customer Statistics module that builds on the base Customer module should access the Customer module.
As for whether your ViewModels should consume a UnitOfWork or a Repository, I would have them use a Repository only. Ideally your Repository should be like a black box: ViewModels can get/save data through the Repository, but should have no idea how it's implemented. Repositories can get the data from a service, Entity Framework, direct data access, or wherever, and the ViewModel won't care.
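To illustrate that layering, here is a language-agnostic sketch (written in Python only for brevity; in the actual solution these would be C# interfaces in the Common project and classes in BusinessModuleX.Data, and all names are placeholders):

# Sketch of the layering: ViewModels depend only on the abstractions.
from abc import ABC, abstractmethod
from typing import Generic, Iterable, TypeVar

T = TypeVar("T")


class IRepository(ABC, Generic[T]):
    """Lives in Common: the only data contract the ViewModels ever see."""

    @abstractmethod
    def get_all(self) -> Iterable[T]: ...

    @abstractmethod
    def add(self, entity: T) -> None: ...


class IUnitOfWork(ABC):
    """Lives in Common: commits whatever the repositories have changed."""

    @abstractmethod
    def commit(self) -> None: ...


class CustomerViewModel:
    """Lives in a module: depends on the abstraction, not on EF or SQL CE."""

    def __init__(self, customers: IRepository):
        self._customers = customers  # injected by the IoC container (Unity)

    def load(self):
        return list(self._customers.get_all())

The concrete repository and unit-of-work classes (Entity Framework, direct ADO.NET, or anything else) then live in the Data projects, and only the IoC container knows which implementation is wired in.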
I'm no expert on design architecture, however that's how I'd build it :)
I would highly recommend that you get the Introduction to PRISM and Repository Pattern training videos inside the Design Patterns Library. They are great ones. Hope it helps.
Anyone aware of guidance on creating a custom SSIS connection manager? I want to abstract the complexities of a source system for the folks that need to extract data out of it using SSIS.
The MSDN tutorial is probably not a bad place to start. I haven't tried using their examples to implement a custom connection manager, but I was able to follow their documentation on custom data flow components to create a few of those without too much fuss in the past, so hopefully the connection manager examples are on the same level. I also found this example, which is probably a little more extensive code-wise since it was actually developed for use.
Keep in mind (as they mention on the MSDN page) that custom connection managers don't always play nice with the built-in components, so you often have to create custom data source components as well. Information about developing those seems to be more common at least, probably because actually parsing the source data tends to be a more varied task than setting up the connection.
Is it a one-off package, or intended to be reusable? If the latter, you could simply use a script as a source component (per this example). If it needs limited re-use, copy/paste may be good enough; if not then perhaps a custom component (example here) would do the trick.