Forum Discussion
Using OpenAI from within Articulate Storyline - a working Story
The OpenAI Node.js solution (and an equivalent Python/Flask solution) is described at: https://platform.openai.com/docs/quickstart/build-your-application. (I built and ran the Python example a couple of weeks ago; it is equivalent to the Node.js solution.)
The openai Node.js library (including an outline of how it works) is here: https://www.npmjs.com/package/openai
With the Node.js solution, the OpenAI key is saved in an environment variable accessible only to that server process, so it is secure. But, assuming a multi-user host, you would need to run one server process per user (each listening on a unique port) if you intend to run a different proxy for each user's key. (You can of course use a single corporate key instead of personal keys, or develop your Node.js server to store multiple keys.)
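To illustrate the "store multiple keys" variant, here is a minimal sketch of one proxy process serving several users, each mapped to their own OpenAI key. How the map is populated (environment variables here) and the user ids are assumptions for illustration only:

```javascript
// Sketch: one proxy process, several users, one OpenAI key per user.
// The key source (env vars) and user ids are illustrative assumptions.
const USER_KEYS = {
  alice: process.env.OPENAI_KEY_ALICE, // set in the server's environment
  bob: process.env.OPENAI_KEY_BOB,
};

// Look up the key for a request's user; fail loudly if none is configured.
function keyForUser(userId) {
  const key = USER_KEYS[userId];
  if (!key) throw new Error(`no OpenAI key configured for user "${userId}"`);
  return key;
}
```

Each incoming request would then be forwarded to OpenAI with the key matching the authenticated user, rather than one process (and port) per user.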
The Node.js solution might work well if you want to build a small web application. In effect you are building one or more proxy web servers, which communicate with the OpenAI server using your key(s).
For discussion: I suggest the Node.js solution is not so well suited to a SCORM or xAPI target on a single-user host, and those targets are the broad output of the Articulate tools. (While it would work for handheld devices, which Storyline and Rise also target, I'm not sure how users would fare on them.)
If the community is saying a Node.js proxy is the only acceptable approach, then there are implications for how Articulate might provision OpenAI in Storyline and Rise, which is clearly wanted.
The problem is not accessing the OpenAI APIs; my concept demo shows that can be done, and I think Chris has a solution along the lines of the Node.js proxy (and/or see the OpenAI tutorials). One problem, which others have focused on, is storing the OpenAI key securely. But I think the main question should be how to integrate OpenAI use cases within SCORM deliverables.
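For concreteness, here is a hypothetical browser-side snippet of the kind an "Execute JavaScript" trigger in Storyline could run, calling a proxy (so no key ships inside the SCORM package). The proxy URL, model name, and the variable names are assumptions; GetPlayer/GetVar/SetVar are Storyline's player API:

```javascript
// Hypothetical Storyline trigger code: the SCORM package talks only to
// your proxy; the OpenAI key never appears in the published content.
const PROXY_URL = 'https://example.com/chat'; // your proxy endpoint (assumption)

// Build the JSON payload the proxy will forward to OpenAI.
function buildPayload(learnerQuestion) {
  return {
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: learnerQuestion }],
  };
}

// Post the learner's question to the proxy and return the reply text.
async function askOpenAI(question) {
  const res = await fetch(PROXY_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildPayload(question)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// A Storyline trigger might wire this to course variables, e.g.
// (variable names are hypothetical):
// const player = GetPlayer();
// askOpenAI(player.GetVar('Question'))
//   .then((answer) => player.SetVar('Answer', answer));
```

Whether an LMS's content-security settings permit such outbound calls from SCORM content is, I think, part of the integration question raised above.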
Feedback and corrections welcome.