Question Details

No question body available.

Tags

python large-language-model

Answers (9)

March 10, 2026 Score: 0 Rep: 1 Quality: Low Completeness: 40%

One common approach is to use Python as a middleware layer between the AI system and the simulation software.

In this architecture, simulation functions are exposed as Python tools or modules.

The AI system then calls these tools when necessary.

For example: user instruction → AI reasoning → Python tool → simulation software → results.

This modular design has several advantages:

  • Easier debugging

  • Clear separation between AI logic and simulation execution

  • Better maintainability

Many AI-assisted engineering systems follow this pattern because it keeps the simulation engine independent from the AI model.
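As a minimal sketch of this pattern (the tool name, its arguments, and the returned values are illustrative stand-ins, not a real Aspen interface):

```python
# Minimal tool-layer sketch: each simulation capability is a plain
# Python function, and the AI system only sees this registry, never
# the simulator itself. The simulation call is stubbed for illustration.

def run_flash_case(temperature_c: float, pressure_bar: float) -> dict:
    """Stubbed simulation call; a real version would drive the simulator."""
    # Placeholder result standing in for real simulator output.
    return {
        "temperature_c": temperature_c,
        "pressure_bar": pressure_bar,
        "vapor_fraction": 0.42,
    }

# The registry is the full surface the AI is allowed to touch.
TOOLS = {
    "run_flash_case": run_flash_case,
}

def dispatch(tool_name: str, arguments: dict) -> dict:
    """Called by the AI layer: route a tool request to the matching function."""
    if tool_name not in TOOLS:
        raise ValueError(f"Unknown tool: {tool_name}")
    return TOOLS[tool_name](**arguments)

result = dispatch("run_flash_case", {"temperature_c": 80.0, "pressure_bar": 1.0})
print(result["vapor_fraction"])  # prints 0.42
```

Because the AI can only reach the simulator through `dispatch`, unknown or malformed tool calls fail fast in the Python layer instead of inside the simulation engine.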

March 10, 2026 Score: 0 Rep: 1 Quality: Low Completeness: 80%

Thanks for your explanation. That makes sense.

In our current work, we are trying to build a Python layer that can translate natural language instructions from the AI assistant into Aspen Plus operations.

However, we noticed a few practical issues:

  • Accessing Aspen Plus from Python typically depends on COM interfaces, which can be unstable.

  • Some simulation parameters are difficult for the AI to infer without domain knowledge.

  • The simulation workflow itself can be quite complex (multiple unit operations and recycle loops).

Because of this, we are considering whether the tool interface should be higher-level, such as:

  • run_simulation()

  • set_parameter(block, variable, value)

  • get_stream_results(stream_name)

instead of letting the AI manipulate the Aspen API directly.

Do you think designing higher-level simulation tools would make the AI interaction more reliable?
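For concreteness, here is a sketch of the interface we have in mind, with the COM connection replaced by an injected stub so it runs without Aspen Plus installed (all class names, variable paths, and values are hypothetical):

```python
class SimulationBackend:
    """Stub backend; a real one would talk to Aspen Plus over COM."""
    def __init__(self):
        self.values = {}
    def set(self, path, value):
        self.values[path] = value
    def run(self):
        return True  # stand-in for "converged"
    def read_stream(self, name):
        # Placeholder stream record standing in for real results.
        return {"stream": name, "flow_kmol_h": 100.0}

class AspenTools:
    """High-level tools exposed to the AI; the backend is injected."""
    def __init__(self, backend):
        self.backend = backend
    def set_parameter(self, block, variable, value):
        # Hide the raw variable-tree path behind a (block, variable) pair.
        self.backend.set(f"{block}/{variable}", value)
    def run_simulation(self):
        return self.backend.run()
    def get_stream_results(self, stream_name):
        return self.backend.read_stream(stream_name)

tools = AspenTools(SimulationBackend())
tools.set_parameter("FLASH1", "TEMP", 80.0)
tools.run_simulation()
print(tools.get_stream_results("PRODUCT"))
```

Injecting the backend keeps the AI-facing interface testable on its own, and the fragile COM details stay confined to one replaceable class.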

March 10, 2026 Score: 0 Rep: 1 Quality: Low Completeness: 20%

In many engineering automation systems, it is recommended to keep the AI component separate from the simulation engine. A typical workflow could be:

1. The user describes the task

2. The AI interprets the request

3. A middleware layer converts the request into simulation commands

4. The simulation software executes the task

5. The results are returned to the AI system for explanation

This layered architecture reduces coupling between the AI model and the engineering software, which makes the system easier to maintain and extend.

March 10, 2026 Score: 0 Rep: 1 Quality: Low Completeness: 40%

Yes, in most AI-assisted engineering systems, higher-level tools are preferred.

Instead of exposing the full Aspen API, it is better to wrap common operations in structured functions. For example:

  • Load simulation

  • Modify operating conditions

  • Run simulation

  • Extract results

This approach reduces the complexity the AI needs to handle.

Another benefit is that the AI does not need to understand the full internal structure of the Aspen model; it only needs to call predefined functions.

In many AI-engineering integrations, the architecture looks like this:

User → LLM → Tool layer (Python) → Simulation software → Results → LLM interpretation

This also makes debugging easier because the simulation layer is separated from the AI reasoning.
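To make the tool layer concrete, here is one way it could describe its functions to an LLM that supports function calling. The shape below is a generic JSON-Schema-style sketch, not any specific provider's format, and the tool names and parameters are illustrative:

```python
import json

# Hypothetical tool descriptions the Python layer could hand to an LLM.
# Field names follow common JSON-Schema conventions.
TOOL_SPECS = [
    {
        "name": "set_parameter",
        "description": "Set one operating variable on a unit-operation block.",
        "parameters": {
            "type": "object",
            "properties": {
                "block": {"type": "string"},
                "variable": {"type": "string"},
                "value": {"type": "number"},
            },
            "required": ["block", "variable", "value"],
        },
    },
    {
        "name": "run_simulation",
        "description": "Run the flowsheet and report convergence status.",
        "parameters": {"type": "object", "properties": {}},
    },
    {
        "name": "get_stream_results",
        "description": "Return key results for one material stream.",
        "parameters": {
            "type": "object",
            "properties": {"stream_name": {"type": "string"}},
            "required": ["stream_name"],
        },
    },
]

print(json.dumps([t["name"] for t in TOOL_SPECS]))
```

Keeping the specs as plain data means the same definitions can be serialized for the LLM and used to validate incoming tool calls in the dispatch code.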

March 10, 2026 Score: 0 Rep: 1 Quality: Low Completeness: 40%

That architecture is very helpful.

We are also thinking about adding an additional interpretation layer after simulation finishes. For example, the workflow could become:

User instruction → AI reasoning → Python tool → Aspen simulation → Raw results → AI interpretation

The AI could then:

  • Summarize key performance indicators

  • Detect abnormal simulation results

  • Suggest parameter adjustments for the next run
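As a rough sketch of the anomaly-detection step, run by the interpretation layer before summarizing (the thresholds and field names here are illustrative, not validated engineering limits):

```python
# Post-simulation sanity checks the AI interpretation step could run
# on raw stream results before writing a summary.

def flag_anomalies(stream_results: dict, max_temp_c: float = 400.0) -> list:
    """Return a list of human-readable warnings for suspicious streams."""
    warnings = []
    for name, data in stream_results.items():
        if data.get("flow_kmol_h", 0.0) <= 0.0:
            warnings.append(f"{name}: non-positive flow")
        if data.get("temperature_c", 0.0) > max_temp_c:
            warnings.append(f"{name}: temperature above {max_temp_c} C")
    return warnings

# Stubbed results standing in for real simulator output.
results = {
    "PRODUCT": {"flow_kmol_h": 95.0, "temperature_c": 120.0},
    "PURGE": {"flow_kmol_h": 0.0, "temperature_c": 450.0},
}
print(flag_anomalies(results))
```

Warnings like these can be fed back to the AI as plain text, so the model explains the anomaly instead of having to detect it numerically itself.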

One challenge we encountered is that the literature often lacks detailed process parameters, which makes it difficult for the AI to generate accurate simulation setups automatically.

So we are considering a human-in-the-loop approach, where engineers provide key process parameters and constraints, and the AI focuses on workflow automation and result analysis.

March 10, 2026 Score: 0 Rep: 1 Quality: Low Completeness: 10%

Thanks for the suggestion. The layered workflow you described is very helpful. Keeping the AI system separate from the simulation engine through a middleware layer sounds like a practical way to make the system easier to manage and extend.

March 10, 2026 Score: 0 Rep: 1 Quality: Low Completeness: 30%

Here is another perspective from someone who has worked on process-simulation automation.

In several projects involving process simulators (Aspen Plus, gPROMS, etc.), we found that AI systems work best when the simulation workflow is partially structured beforehand.

Instead of letting the AI freely explore the simulator, it is often useful to define a set of predefined workflows, for example:

  • Process setup

  • Parameter sweep

  • Sensitivity analysis

  • Optimization runs

The AI assistant can then decide which workflow to trigger, while the Python layer handles the detailed interaction with the simulator.
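For example, a parameter-sweep workflow could be composed from the low-level tools like this. The tool callables are injected and stubbed here so the sketch runs standalone; in practice they would wrap the simulator interface:

```python
# One predefined workflow built on top of low-level tools. The AI picks
# the workflow and its arguments; the loop logic stays in Python.

def parameter_sweep(set_parameter, run_simulation, get_results,
                    block, variable, values):
    """Run the simulation once per value and collect (value, results) pairs."""
    results = []
    for value in values:
        set_parameter(block, variable, value)
        run_simulation()
        results.append((value, get_results()))
    return results

# Stub tools standing in for the real simulator layer.
state = {}
sweep = parameter_sweep(
    set_parameter=lambda b, v, val: state.update({f"{b}/{v}": val}),
    run_simulation=lambda: True,
    get_results=lambda: dict(state),
    block="FLASH1",
    variable="TEMP",
    values=[60.0, 80.0, 100.0],
)
print(len(sweep))  # one entry per swept value, prints 3
```

Keeping the iteration inside the workflow function means the AI issues one high-level request instead of many fragile, stateful tool calls.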

For Aspen Plus specifically, many people interact with it through COM automation or scripting interfaces, but these interfaces can be fragile when driven dynamically by an AI system. Wrapping them inside a stable Python module is definitely a good design choice.
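One defensive pattern that helps with flaky COM calls is a retry wrapper in the Python module. A minimal, library-agnostic sketch, where the callable stands in for any COM operation:

```python
import time

def call_with_retry(call, attempts=3, delay_s=1.0, exceptions=(Exception,)):
    """Invoke `call`, retrying up to `attempts` times on the given exceptions.

    `call` is any zero-argument callable, e.g. a lambda wrapping a COM
    operation. Re-raises the last exception if every attempt fails.
    """
    last_exc = None
    for attempt in range(attempts):
        try:
            return call()
        except exceptions as exc:
            last_exc = exc
            if attempt < attempts - 1:
                time.sleep(delay_s)  # give the COM server time to recover
    raise last_exc
```

This keeps the retry policy in one place, so the high-level tools stay simple and the AI never sees a transient COM error unless all attempts fail.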

The human-in-the-loop approach you mentioned is also very important. In chemical engineering simulations, domain knowledge is often required to select reasonable operating conditions or convergence settings.

So a practical architecture could be:

User → LLM → workflow/tool selection → Python automation layer → Aspen Plus → results → AI interpretation → human validation.

March 10, 2026 Score: 0 Rep: 1 Quality: Low Completeness: 10%

Thank you for sharing your practical experience.

Defining predefined workflows and combining them with a human-in-the-loop approach sounds very helpful for improving the reliability of AI-assisted simulations.

This gives us a clearer direction for designing our Python tools.

March 10, 2026 Score: 0 Rep: 1 Quality: Low Completeness: 0%

I agree with this point.

Structuring the workflow beforehand usually makes AI–simulation integration much more stable.

Good luck with your research!