
Adding SCORM packages to Open edX via SCORM Cloud and LTI

February 8, 2015

Do you want to be able to use existing training content in an Open edX course? SCORM – the “Sharable Content Object Reference Model” – is a set of interoperability standards for e-learning software that governs how online learning content and Learning Management Systems (LMSs) communicate with each other. It allows units of online training material to be shared across e-learning systems.

Open edX doesn’t support SCORM natively yet, but here at Jazkarta we’ve successfully integrated SCORM Cloud and Open edX. SCORM Cloud is a third party service that wraps e-learning content in a SCORM dispatch package that can be embedded into an LMS. The gist of this integration is as follows:

  1. Export a learning module from your e-Learning authoring tool as a SCORM package.
  2. Upload SCORM package to SCORM Cloud.
  3. Configure a SCORM Cloud dispatch to serve the package as an LTI (Learning Tools Interoperability) component.
  4. Add an LTI component to your Open edX course to embed the package, which is actually hosted on SCORM Cloud.

The user doesn’t need to log in again since credentials are transmitted via LTI, and the activity is passed back into edX for grading purposes.

Here is a detailed how-to for people who would like to duplicate our work. I found some sample courses in this blog article and used the “Picture Perfect_Simulation Sample” course. Open that course in Adobe Captivate and follow these instructions for prepping, uploading, and using this sample in Open edX. (You can download Adobe Captivate 8 here – a free 30-day trial is available by clicking the ‘Try’ link on that page.)

Mark interactive elements as graded and assign points


For each interactive element that you want graded, click the element, click the Properties icon (in the upper right corner of the screen), mark the element as graded in the properties panel, assign a point value that will count towards the grade, and check the Report Answers checkbox.

Set project end action to close project


In Publish Settings -> Project -> Start and End, set the Project End Action to “Close project” (I think it defaults to Stop project). That way the window showing the assessment will automatically close when it’s done playing.


In Publish Settings -> Project -> Quiz -> Reporting, set the Standard form to SCORM 2004, and then click the Configure button.

Give the course an identifier and title


You can optionally give your course an identifier and title. If you skip this step, all of your courses will have the default identifier “Captivate E-Learning Course” and it will be hard to tell them apart.

Export from Captivate


From the File menu, choose Publish… and make sure the eLearning output value is set to SCORM 2004, which you should have already set in the previous step. Click the Publish button and it will create a .zip file in the location you specified.

Upload to SCORM Cloud


Create a free account at SCORMCloud.com and from the Add Content section of the dashboard, upload the .zip file that you just created.

Tell SCORM Cloud to make it available via LTI


Once the content has been uploaded, you need to create a dispatch that will be used to make it available via LTI.

Create a dispatch


If this is your first time creating a dispatch, click Create New Destination.

Create a destination


Enter ‘Open EdX’ as the name – it can really be anything, it doesn’t need to be ‘Open edX’. On subsequent dispatches you can click Add Existing Destination and choose the destination you created.

Export dispatch as BLTI


From the Dispatch screen, click on the dispatch that you just created, and under Export your dispatch, click the BLTI button.

Capture the BLTI information


You should see a pop-up with information about the BLTI endpoint: a URL, a key, and a secret. Make a note of the URL, Key, and Secret that it displays.

Enable LTI in your Open edX course


Log in to edX Studio, go to your course, and select Settings -> Advanced Settings. In the Advanced Module List, add “lti” to enable adding LTI components to your course.
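
The Advanced Module List field holds a JSON list of module names; for a course with no other advanced modules enabled it would look like this:

[
"lti"
]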

Add an LTI passport entry

wpid907-media_1423447130599.png

On the same Advanced Settings page, scroll down to the section entitled LTI Passports, and add a new entry in the form:

lti_id:client_key:client_secret

The client_key and client_secret are the key and secret that you got when you made the BLTI dispatch in SCORM Cloud. The lti_id can be anything (no spaces) – it’s just an identifier that will be used when we add an individual LTI component to our edX course.

So for our Picture Perfect example, it would be:

[
"pictureperfect:5f82ddc4-ff27-45ab-b2ab-02fa929e9e34:MHDEcqX4HTz6ZOk6FJNpdxxxxxxxxxxxxxx"
]

Note: you need to add a new entry for each course in SCORM Cloud that you want to embed in an edX course. It’s annoying that this is an extra step required for each item. Open edX assumes that an LTI provider will have one key/secret combo for all courses, but SCORM Cloud uses a different one for each course.
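
For example, a hypothetical LTI Passports list with two SCORM Cloud dispatches (the second entry’s key and secret are placeholders, not real values):

[
"pictureperfect:5f82ddc4-ff27-45ab-b2ab-02fa929e9e34:MHDEcqX4HTz6ZOk6FJNpdxxxxxxxxxxxxxx",
"anothercourse:00000000-1111-2222-3333-444444444444:placeholderSecretForTheOtherDispatch"
]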

Add a new LTI component in your course


Go to your course outline: Content -> Outline and navigate to where you want to add the LTI component containing your SCORM Cloud exercise/quiz. Click the New Unit button.

Add an advanced component


If you’ve enabled Advanced Components for your course, you should see the Advanced Component button appear when you add a new unit.

Choose the LTI component


Edit the component to set the values


Configure the LTI component settings


For LTI Application Information, you can type “SCORM Cloud” or whatever you want in this field. The LTI ID needs to exactly match the lti_id that you used in the LTI Passports entry. The LTI URL for SCORM Cloud should be set to http://cloud.scorm.com/sc/blti

The Open in New Page setting should be set to True.

More LTI component settings


Make sure that you set Scored to True and assign the weight you want it to have in the total grade. You can also set a Display Name if you want.

Preview or Publish your changes


Once you’ve added the component, you need to click the Preview Changes button or Publish it to your course.

Login as a student and try the exercise


Now we can log in as a student and test it out. When you navigate to the page, there is a “View resource in a new window” link. Click it to launch the SCORM Cloud exercise in a new window.

Take the exercise


You then take the exercise while SCORM Cloud tracks your activity.

Complete the exercise


At the end of the exercise, it will compute a score for you, and if the score is above the threshold for passing, you get a “Pass” grade.

See score in edX progress page


This grade is sent back into edX and can be seen on the student’s Progress page. Please note that these scores don’t match because the two screenshots show different students.

Caveats

1. Make sure that your Open edX site is at an internet address that can be reached by SCORM Cloud. This means basic auth must be disabled, and if you’re using SSL, that you have a real SSL certificate, not a self-signed certificate.

2. We needed a fix to the edX code which parses the SCORM Cloud request, to handle the non-standard (well, old standard) XML namespace that it uses: https://github.com/jazkarta/edx-platform/commit/ebcf93002557176102df2bf93ab79a979aaf0fba

3. We also needed a fix for edX’s validation of the OAuth signature (https://github.com/edx/edx-platform/pull/5016).

 

Get involved

We invite you to join the conversation on the edx-code mailing list about SCORM support in edX, and feel free to reach out and contact us if you are looking for professional help with edX.

Or if you want to take Open edX for a test-drive, you can get a free trial from our partner Appsembler.

The Many Options for Hosting a Plone Site

February 4, 2015

Over the last year there has been a lot of work on hosting tools in the Plone community. Here at Jazkarta, most of our clients’ sites are hosted using Amazon Web Services and OpsWorks because this platform’s robust tool set and high level of flexibility are a good fit for sites like The Mountaineers and KCRW. But there are many other options, especially for smaller sites.

To provide a forum to discuss all the recent activity, I hosted a panel discussion on Plone hosting at the 2014 Plone Conference. We covered Ansible, OpsWorks, Nix, Docker, OpenVZ, Heroku, and some PaaS providers. The slides from the session provide lots of information about all these options, including links, pros and cons.

 

You can also watch a video of the panel discussion on Vimeo.

 

How to Create a Badge System in Plone

January 30, 2015

With thousands of volunteers taking and teaching courses, climbing peaks, and donating time and money for environmental advocacy and youth programs, The Mountaineers needed a way to recognize their members’ contributions and skill levels. To let them do this, we created a badge system for their Plone website:

Mountaineers Badges

Members who complete milestones like graduating from a course or summiting a set of peaks are awarded a badge, which shows on their profile page.

Mountaineers Profile Badges

This feature was easy to implement because of a Plone add-on, collective.workspace, which we had already created to provide roster functionality for The Mountaineers’ courses, activities, and committees, allowing each participant’s role, registration status, and other information to be managed. Here are instructions for how to create a badge system on your own Plone site with the help of collective.workspace. The screenshots are from a Plone 5 site but this will also work in Plone 4.

  1. Add collective.workspace to the eggs in your buildout:
    [instance]
    eggs += collective.workspace
  2. Start Plone, go to the Add-ons control panel in Site Setup, and activate collective.workspace.

  3. Go to the Dexterity Types control panel and add a Badge content type.
  4. Add an image field to hold the badge image.
  5. Go to the Behaviors tab and enable the “Team workspace” behavior.
  6. Now add a Badge to the site.
  7. There’s now a Roster link on the toolbar when viewing the badge.
  8. On the roster, we can add a user, indicating that they have been granted the badge.
    (Position is more useful for a committee roster. Selecting the Admins group gives the user permission to help manage this roster.)
  9. Now the user shows up on the badge roster.
  10. This isn’t particularly useful unless we make the user’s badges show up somewhere. Customize the author template and add this TAL:
    <div>
      <h2>Badges</h2>
      <tal:b repeat="badge python:context.portal_catalog(portal_type='badge', workspace_members=username)">
        <img src="${badge/getURL}/@@images/image/thumb" class="image-left" title="${badge/Title}" />
      </tal:b>
      <br style="clear: both;" />
    </div>
  11. Now the assigned badges show up on a user’s profile!

That’s it! Enjoy your new badge system!
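
If you ever need to grant badges programmatically – say, from a script that checks whether a member has completed a milestone – the roster can also be manipulated from Python via collective.workspace’s IWorkspace adapter. Here’s a minimal sketch, assuming a badge at a hypothetical path and a hypothetical user id:

    from plone import api
    from collective.workspace.interfaces import IWorkspace

    # Look up the badge object (this path is hypothetical).
    badge = api.content.get(path='/badges/peak-bagger')

    # Adding the user to the badge's roster grants the badge; the user
    # will then be found by the workspace_members catalog query used in
    # the author template above.
    IWorkspace(badge).add_to_team(user='jdoe')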

Another Plone Site on AWS OpsWorks

January 26, 2015

Congratulations to YES! Magazine on the launch of their redesigned website!

YES! Magazine

This award-winning, ad-free, non-profit online (and print) magazine for progressive thinkers runs on Plone, our favorite CMS. The site was originally launched in 2009 by our friends at Web Collective and Groundwire (RIP), and this is its first redesign since then. The focus was squarely on mobile usability (since 40% of site visits are now mobile) and social engagement. It looks great!

All the planning, analysis, and design work for the new site was done in-house; the Diazo theme implementation and site upgrade were done by Bryan Wilson, formerly of Web Collective. We helped a little bit as well: the YES! technical staff decided to host the new site on Amazon’s AWS cloud platform using the OpsWorks recipes for Plone and Python developed by Alec Mitchell. Alec walked Bryan through some of the technical details of the deployment stack and he was off and running.

We’re very pleased our Plone-OpsWorks contributions helped YES! Magazine!

Botany on the Web

January 8, 2015

Here at Jazkarta we really enjoy innovative uses of web technology, especially in the service of non-profit and educational projects. Go Botany! and Go Orchids! were two such projects – they harnessed Javascript, Python, Django, and Solr to create user-friendly plant identification tools. Go Botany! was created by the New England Wild Flower Society with funding from the National Science Foundation, and it was awarded the 2013 NEEEA Maria Pirie Award for Environmental Education Program of the Year. This major award recognizes outstanding environmental education programs that demonstrate innovation and creativity, have been implemented broadly, and can easily be replicated in other regions.

As proof of that last point, Go Orchids! – created by the North American Orchid Conservation Center – was able to adopt the Go Botany! code base (which had been released as open source) and create a site focused on orchid identification in a relatively short period of time. Last fall Go Orchids! produced a video introducing the tool to new users:

 

See our earlier Go Botany! blog post for more information about the technology behind these sites. For an in-depth look at the system architecture, watch this Djangocon 2014 presentation by Brandon Rhodes (start at minute 18:54).

We are so pleased to have played a part in both these projects. And we’d love to do more of them! Go Orchids! contributed improvements to the code base that make the application more flexible and easier to adapt. Any organizations interested in having a plant identification website tailored to a particular region or taxon should contact us or the New England Wild Flower Society.

New KCRW Website Launched

June 17, 2014

KCRW is Southern California’s flagship National Public Radio affiliate, featuring an independent mix of music, news, and cultural programming supported by more than 55,000 member/subscribers. The non-commercial broadcast signal is simulcast on the web at KCRW.com, along with an all-music and an all-news stream and an extensive selection of podcasts. KCRW was a pioneer in online radio; they’ve been live streaming on the web since 1995. The station has an international listening audience, and is widely considered one of the most influential independent music radio stations around the globe.

KCRW has used the Plone CMS for 7 years and for the last 6 the website has been running on Plone version 2.5, which has been increasingly showing its age. Over a year ago KCRW embarked on a project to redesign the site and upgrade the software and functionality, and they selected Jazkarta for the technical implementation. It was an amazingly fun and challenging project, with lots of interactive multimedia functionality and a beautiful responsive design by the New York City firm Hard Candy Shell. We’re very excited to announce that the site launched last Monday!

KCRW.com

Making the Most of the CMS

Plone is a robust, feature-rich enterprise CMS with many capabilities and add-ons that were invaluable for developing the KCRW site. Some highlights:

  • Flexible theming – Using Diazo, a theme contained in static HTML pages can be applied to a dynamic Plone site. For KCRW, Hard Candy Shell created a fully responsive design with phone, tablet and desktop displays. Jazkarta applied the theme to the site using Diazo rules (a sample appears after this list), making adjustments to stylesheets or dynamically generated views where necessary so the CSS classes matched up.
  • Modular layouts – We used core Plone portlets and collective.cover tiles to build a system of standard layouts available as page fragments. Many custom tiles and portlets are based on the same underlying views so editors can easily reuse content fragments throughout the site. Plone portlets are particularly handy because of the way they can be inherited down the content tree – for example allowing the same promotion or collection of blog posts to be shown on all the episodes and segments within a show.
  • Managing editors – Plone provides granular control over editing permissions. For KCRW, this means that administrators can control what different types of users are allowed to edit in different parts of the site.
  • Customizable forms – We created PloneFormGen customizations to track RSVPs, signups, and attendance at events.
  • Salesforce integration – Plone has an excellent toolkit for integration with the Salesforce.com CRM. For this phase of the project we implemented basic integration. Stay tuned for additional KCRW member features to be added this fall that take advantage of the Plone-Salesforce connection.
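
To give a flavor of how Diazo theming works, here is a minimal rules file; the selectors are illustrative, not KCRW’s actual rules:

    <rules xmlns="http://namespaces.plone.org/diazo"
           xmlns:css="http://namespaces.plone.org/diazo/css">
      <!-- Use the designers' static HTML page as the theme -->
      <theme href="index.html" />
      <!-- Pour the dynamic Plone content into the theme's content area -->
      <replace css:theme="#content" css:content="#content-core" />
    </rules>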

Supporting a Radio Station

KCRW is a radio station, and we developed some cool features to support their content management process and all the rich media available on the site.

  • A set of custom content types (shows, episodes, segments, etc.) and APIs for scheduling radio programs and supporting live and on demand audio and video. The APIs provide all sorts of useful information in a consistent way across lots of contexts, including mobile and tablet applications.
  • An “always on top” player built using AJAX page loading – as you navigate around the site it just keeps playing and the controls continue to show. This works equally well on mobile devices and desktops.
  • Central configuration of underwriting messages in portlets using responsive Google DFP tags.
  • Integration with many external services like Disqus for threaded comments, Zencoder for audio encoding, Ooyala for video hosting and encoding, and Castfire for serving podcasts and live streams with advertising.
  • An API for querying data about songs played on the station – live or on demand. The API is built on the Pyramid framework and queries a pre-existing data source (a generic sketch follows this list).
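
Here is the skeleton of a small Pyramid JSON endpoint of that sort – a generic sketch rather than KCRW’s actual code, with a hypothetical route and a stubbed-out query function:

    from wsgiref.simple_server import make_server
    from pyramid.config import Configurator
    from pyramid.view import view_config

    @view_config(route_name='songs', renderer='json')
    def songs_view(request):
        # 'show' and 'date' arrive as query string parameters.
        show = request.params.get('show')
        date = request.params.get('date')
        return {'songs': query_songs(show, date)}

    def query_songs(show, date):
        # Stand-in for a query against the pre-existing song data source.
        return []

    if __name__ == '__main__':
        config = Configurator()
        config.add_route('songs', '/api/songs')
        config.scan()  # registers the @view_config view above
        make_server('0.0.0.0', 8080, config.make_wsgi_app()).serve_forever()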

A Robust Deployment Platform

More than any other client’s, KCRW’s site provided the impetus for us to adopt AWS OpsWorks. KCRW.com had been hosted on a complex AWS stack with multiple servers managed independently. We needed an infrastructure that was easier to manage and could even save KCRW money by being easily scaled up and down as needed. Another major concern was high availability, so we tried to eliminate single points of failure.

To accomplish this we made sure everything on OpsWorks was redundant. We have multiple instances for nearly every piece of functionality (Plone, Blob Storage, Celery, Memcached, Nginx, Varnish and even HAProxy), and the redundant servers are in multiple Availability Zones so the website can withstand the failure of an entire Amazon AZ. The layers of functionality can be grouped onto a single instance or spread across multiple instances. It’s easy to bring up and terminate new instances as needed; this can be done manually, or automated based on instance load or during specific times of day. Time-based scaling is particularly relevant to KCRW and we are still experimenting with how best to schedule extra servers during popular weekday listening hours. Amazon’s Elastic Load Balancer and Multi-AZ RDS services give us the ability to deploy resources in multiple Availability Zones and eliminate single points of failure.

Dynamic Client, Dynamic Site

All of these technical details are fun for developers to talk about, but what’s really impressive is how much fun the site is to look at and use. Kudos to KCRW for having the vision to create such a great site, and to KCRW staff for the appealing new content that appears every day.

Scalable Python Application Deployments on AWS OpsWorks

June 3, 2014

Here at Jazkarta we’ve been creating repeatable deployments for Plone and Django sites on Amazon’s AWS cloud platform for 6 years. Our deployment scripts were built on mr.awsome (a wrapper around boto, Python’s AWS library) and bash scripts. Our emphasis was on repeatable, not scalable (scaling had to be done manually), and our system was a bit cumbersome. There were aspects that were failure prone, and the initial deployment of any site included a fair amount of manual work. When we heard about AWS OpsWorks last summer we realized it would be a big improvement to our deployment process.

OpsWorks

OpsWorks is a way to create repeatable, scalable, potentially multi-server deployments of applications on AWS. It is built on top of Chef, an open source configuration management tool. OpsWorks simplifies the many server configuration tasks necessary when deploying to the cloud; it allows you to create repeatable best practice deployments that you can modify over time without having to touch individual servers. It also lets you wire together multiple servers so that you can deploy a single app in a scalable way.

OpsWorks is a bit similar to PaaS offerings like Heroku but it is better suited for sites that need more scalability and customization than Heroku can provide. The cost of Heroku’s point and click simplicity is a lack of flexibility – OpsWorks lets you change things and add features that you need. And unlike the PaaS offerings, there is no charge for OpsWorks – you don’t pay for anything besides the AWS resources you consume.

Briefly, here’s how it works.

  • First you create a stack for your application, which is a wrapper around everything. It may include custom Chef recipes and configuration, and it defines the AMI that will be used for all the instances. There are currently two choices: the latest stable Ubuntu LTS or Amazon Linux. (We use Ubuntu exclusively.)
  • Within the stack you define layers that represent the services or functionality your app requires. For example, you might define layers for an app server, front end, task queue, caching, etc. The layers define the resources they need – Elastic IPs, EBS volumes, RAID10 arrays, or whatever.
  • Then you define the applications associated with the layers. This is your code, which can come from a github repo, an S3 bucket, or wherever.
  • Then you define instances, which configure the EC2 instances themselves (size, availability zone, etc.), and assign the layers to the instances however you want. When you define an instance it is just a definition; it does not exist until it is started. Standard (24-hour) instances are started and stopped manually; time-based instances have defined start and stop days and times; and load-based instances have customizable CPU, load, or memory thresholds which trigger instances to be started and stopped. When an instance starts, all the configuration for all the layers is run, and when it is stopped all the AWS resources associated with it are destroyed – aside from persistent EBS storage or Elastic IPs, which are bound to the definition of the instance in OpsWorks instead of being bound to an actual instance. (A sketch of driving this through the API appears after this list.)
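
Everything above can also be driven through the OpsWorks API. Here’s a rough sketch of creating a stack and a time-based instance in Python using boto3; every name, id, and ARN below is a placeholder, not a real value:

    import boto3

    opsworks = boto3.client('opsworks', region_name='us-east-1')

    # Create the stack; the IAM role and instance profile ARNs are placeholders.
    stack = opsworks.create_stack(
        Name='my-plone-stack',
        Region='us-east-1',
        ServiceRoleArn='arn:aws:iam::123456789012:role/aws-opsworks-service-role',
        DefaultInstanceProfileArn='arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role',
        DefaultOs='Ubuntu 14.04 LTS',
    )

    # Define a time-based instance in an existing layer (id is a placeholder).
    instance = opsworks.create_instance(
        StackId=stack['StackId'],
        LayerIds=['my-layer-id'],
        InstanceType='m3.medium',
        AutoScalingType='timer',
    )

    # Schedule it to run on weekday mornings (hour -> 'on').
    opsworks.set_time_based_auto_scaling(
        InstanceId=instance['InstanceId'],
        AutoScalingSchedule={
            day: {'8': 'on', '9': 'on', '10': 'on', '11': 'on'}
            for day in ('Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday')
        },
    )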
Screenshots from the OpsWorks console: managing layers, managing a time-based autoscaling instance, and managing a load-based autoscaling instance.

For more details and a case study about switching from Heroku, see this excellent introduction to OpsWorks by the folks at Artsy.

What We Did

OpsWorks has native support for deploying Ruby on Rails, NodeJS, Java Tomcat, PHP, and static HTML websites, but no support for Python application servers (perhaps partly because there is no standard way to deploy Python apps). This was a situation we thought needed to be remedied. Furthermore, few if any PaaS providers are suitable for deploying the Plone CMS which many of our clients use. Because OpsWorks essentially allows you to build your own deployment platform using Chef recipes, it seemed like it might be a good fit.

Chef is a mature configuration management system in wide use, and there are many open source recipes available for deploying a variety of applications. None of those quite met our needs in terms of Python web application deployment, so we wrote two new cookbooks (a cookbook is a bundle of Chef recipes and configuration). We tried to structure the recipes to mimic the built-in OpsWorks application server layer cookbooks.

The repository is here: https://github.com/alecpm/opsworks-web-python. Each cookbook has its own documentation.

  • Python Cookbook – provides recipes to create a Python environment in a virtualenv, to deploy a Django app, and to deploy a buildout
  • Plone Cookbook – builds on the Python and buildout cookbooks to deploy scalable and highly available Plone sites

The Plone cookbook can handle complex Plone deployments. An example buildout is provided that supports the following layers:

  • Clients and servers and their communication
  • Load balancing
  • Shared persistent storage for blobs
  • ZEO server
  • RelStorage – either via Amazon RDS or a custom database server in its own OpsWorks layer (there is a built-in layer for MySQL)
  • Solr
  • Redis and Celery
  • Auto-scaling the number of ZEO clients based on the AWS instance size

The recipes handle automatically interconnecting these services whether they live on a single instance or on multiple instances across different Availability Zones. For more information, see the README in each cookbook.

What’s Next

We’ve used OpsWorks with our custom recipes on a few projects so far and are quite happy with the results. We have a wishlist of a few additional features that we’d like to add:

  • Automated rolling deployments – a zero-downtime rolling deployment of new code that updates each ZEO client in sequence, with a pause so the site doesn’t shut down.
  • Native Solr support – use the OS packages to install Solr (instead of buildout and collective.recipe.solr) and allow custom configuration for use with alm.solrindex or collective.solr in the recipe.
  • Integration of 3rd party (New Relic, Papertrail, …) and internal (munin, ganglia, …) monitoring services.
  • Better documentation – we need feedback about what needs improvement.

If you’d like to contribute new features or fixes to the cookbooks feel free to fork and issue a pull request!
