Studies and Solvers

COMSOL Multiphysics® version 5.2a includes new and updated solvers, added support for absorbing layers for wave propagation in the time domain, a new Multiphysics table for enabling and disabling multiphysics couplings in a study, and more. Browse all of the COMSOL Multiphysics® version 5.2a updates pertaining to studies and solvers below.

New Smoothed Aggregation Algebraic Multigrid (AMG) Solver

A new state-of-the-art algebraic multigrid (AMG) solver, the Smoothed aggregation AMG solver, can be useful for a wide range of applications. The new solver is better suited than the previously available classical AMG solver for problems with strong couplings between field variables, such as linear elasticity in structural analysis. The main benefit of the AMG method compared to the geometric multigrid (GMG) method is that no mesh generation is needed for the coarser grid levels. This is beneficial for large CAD models where creating a coarse mesh may be challenging or impossible.

The mesh for a structural analysis of a frame with bracket. As shown, a coarser mesh would be impossible to create for this model. The model has 250,000 quadratic tet elements and 1282k DOFs. The solution process requires 51 iterations with the Conjugate Gradients solver using the new Smoothed aggregation AMG as a preconditioner. The solution time is 65 seconds and the memory requirement is 3.5 GB on a workstation with an Intel® Xeon® processor E5-1650 3.5 GHz.

The smoothed aggregation AMG method works by clustering the degrees of freedom (DOFs) into aggregates based on a connectivity criterion. Each aggregate then becomes a new node on the next multigrid level, and the algorithm proceeds until either a certain number of levels has been reached or the number of DOFs is sufficiently small.
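To make the aggregation step concrete, the following Python sketch shows the basic idea under simplifying assumptions; it is an illustration, not COMSOL's implementation, and the strength threshold theta, the damping factor omega, and all function names are chosen for the example.

import numpy as np
from scipy.sparse import csr_matrix, diags

def aggregate(A, theta=0.08):
    """Greedily cluster the DOFs of a sparse matrix A into aggregates.
    DOFs i and j are treated as strongly connected when
    A[i, j]**2 >= theta**2 * A[i, i] * A[j, j]."""
    n = A.shape[0]
    owner = -np.ones(n, dtype=int)       # aggregate index for each DOF
    diag = np.abs(A.diagonal())
    n_agg = 0
    for i in range(n):
        if owner[i] >= 0:
            continue
        owner[i] = n_agg                 # seed a new aggregate at DOF i
        row = A.getrow(i)
        for j, a_ij in zip(row.indices, row.data):
            if owner[j] < 0 and a_ij**2 >= theta**2 * diag[i] * diag[j]:
                owner[j] = n_agg         # absorb strongly connected neighbors
        n_agg += 1
    # Tentative prolongation: piecewise constant over each aggregate.
    return csr_matrix((np.ones(n), (np.arange(n), owner)), shape=(n, n_agg))

def build_hierarchy(A, max_levels=5, min_dofs=500, omega=2.0 / 3.0):
    """Aggregate repeatedly until the level limit is reached or the
    coarsest problem is small enough; each aggregate becomes one DOF
    on the next level via the Galerkin product P.T @ A @ P."""
    levels = [A]
    while len(levels) < max_levels and levels[-1].shape[0] > min_dofs:
        Ak = levels[-1]
        P = aggregate(Ak)
        # The "smoothed" part: improve the tentative prolongation with
        # one damped Jacobi step, P <- (I - omega * D^-1 * Ak) @ P.
        P = P - omega * diags(1.0 / Ak.diagonal()) @ (Ak @ P)
        levels.append(csr_matrix(P.T @ Ak @ P))
    return levels

The resulting hierarchy would then be used inside a multigrid cycle as a preconditioner for an iterative solver such as Conjugate Gradients, mirroring the setup in the examples above.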

The Introduction to COMSOL Multiphysics manual contains detailed step-by-step instructions for a mesh convergence analysis using the new Smoothed aggregation AMG solver.

The computed stresses from a structural analysis of a frame with bracket and the Settings window for the Smoothed aggregation AMG solver.

A structural analysis of a hard-tail aluminum mountain bike frame. The model has 194,000 quadratic tet elements and 1157k DOFs. The solution process requires 117 iterations with the Conjugate Gradients solver using the new Smoothed aggregation AMG as a preconditioner. The solution time is 96 seconds and the memory requirement is 3.1 GB on a workstation with an Intel® Xeon® processor E5-1650 3.5 GHz.

Application Library path:

Structural_Mechanics_Module/Applications/bike_frame_analyzer_llsw

New Direct Solver for Clusters

A new direct solver for clusters has been added: the Parallel Direct Sparse Solver for Clusters from the Intel® Math Kernel Library software product. This solver is now chosen automatically when you select the PARDISO option while running models on clusters. The PARDISO solver, used for shared-memory computations, is also part of the Intel® Math Kernel Library software product. In previous versions, selecting the PARDISO solver option on a cluster used the MUMPS solver instead, due to the lack of an alternative direct solver for clusters. You can still revert to this behavior by clearing the Parallel Direct Sparse Solver for Clusters check box.

Upgraded MUMPS Solver

The direct MUMPS solver has been upgraded and provides significantly better performance thanks to a new implementation of OpenMP® API parallelism.

Optimized Domain Decomposition Solver

The Domain Decomposition solver has been refined and optimized for handling large problems, especially for strongly coupled multiphysics phenomena where, previously, a direct solver was the only option.

  • The solver now uses the METIS algorithm for domain partitioning by default.
  • The solver has been improved by adding an optimized setup phase and includes more efficient communication when running on clusters.
  • The coarse grid for this solver can now be set up using algebraic methods (AMG). This is the preferred option because very coarse grids can be used and the technique does not require generating a coarse mesh, which can fail for complicated CAD models. A minimal sketch of the underlying overlapping-subdomain idea follows this list.
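The core of the method can be sketched as a one-level overlapping additive Schwarz preconditioner. The Python code below is a minimal illustration under stated assumptions: the DOFs are split into contiguous index blocks as a stand-in for a METIS partition, the coarse-level (AMG) correction is omitted, and the local LU factors are kept in memory (recomputing and clearing them between subdomain solves, as in the example below, trades time for memory). It is not COMSOL's implementation.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres, splu

def schwarz_preconditioner(A, n_sub=4, overlap=2):
    """One-level overlapping additive Schwarz: extend each DOF block by
    `overlap` indices and sum the local direct (LU) subdomain solves."""
    n = A.shape[0]
    bounds = np.linspace(0, n, n_sub + 1, dtype=int)
    blocks = []
    for k in range(n_sub):
        lo = max(bounds[k] - overlap, 0)
        hi = min(bounds[k + 1] + overlap, n)
        idx = np.arange(lo, hi)
        blocks.append((idx, splu(A[idx][:, idx].tocsc())))  # local LU factor
    def apply(r):
        z = np.zeros_like(r)
        for idx, lu in blocks:
            z[idx] += lu.solve(r[idx])   # add each subdomain correction
        return z
    return LinearOperator(A.shape, matvec=apply)

# Usage: a 1D Poisson test problem solved with preconditioned GMRES.
n = 2000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)
x, info = gmres(A, b, M=schwarz_preconditioner(A, n_sub=8, overlap=4))
print('converged' if info == 0 else 'gmres info = %d' % info)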

Application Library path for an example that uses the Optimized Domain Decomposition Solver:

Acoustics_Module/Tutorials/transfer_impedance_perforate

Velocity and the total acoustic pressure in the transfer impedance of a perforate model. The model is solved with 18 GMRES iterations preconditioned with the Domain Decomposition method. The method has automatically divided the computation into 30 subdomains using 10 domain groups. The subdomains are solved with a direct solver, which is the only viable solver for this strongly coupled problem. The computation requires 14.3 GB of RAM by recomputing and clearing the LU factors between the subdomain solution steps. The computation takes 1 hour and 21 minutes to finish. The total number of DOFs is 2579k and 409k tetrahedral elements are used. As a comparison, the memory requirement when using a direct solver is 120 GB.

Nonreflecting Absorbing Layers for Time-Dependent Wave Simulations

Built-in support for absorbing layers for wave propagation in the time domain has been introduced, using the nodal discontinuous Galerkin method. Absorbing layers are used as nonreflecting boundary conditions, created by adding extra subdomains with an absorbing layer property outside of the computational region of interest; the layers are stretched by a coordinate transformation and the waves are damped by filter techniques. For the outer boundary of the absorbing layers, a local low-reflecting boundary condition is used.

This technique effectively reduces reflections from the layer, providing a general-purpose method for truncating the computational domain in scattering problems and in other problems where nonreflecting boundary conditions are needed.
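As a rough illustration of the principle, the Python sketch below solves a 1D wave equation with finite differences rather than the nodal discontinuous Galerkin method used by COMSOL. A layer is appended at each end of the domain, the solution is damped inside the layers by a smooth filter profile at every time step, and a simple first-order low-reflecting condition is applied at the outermost points. The layer width, damping strength, and pulse parameters are arbitrary choices for the example.

import numpy as np

c, L, n_layer = 1.0, 1.0, 60            # wave speed, domain length, layer width
nx_int = 400
dx = L / nx_int
nx = nx_int + 2 * n_layer               # interior grid plus two absorbing layers
dt = 0.5 * dx / c                       # CFL-limited time step

# Filter profile: no damping in the interior, smoothly increasing
# absorption toward the outer edges of the layers.
sigma = np.zeros(nx)
ramp = np.linspace(0.0, 1.0, n_layer)
sigma[:n_layer] = ramp[::-1] ** 2
sigma[-n_layer:] = ramp ** 2
damp = np.exp(-40.0 * sigma * dt)

x = (np.arange(nx) - n_layer) * dx
u = np.exp(-((x - 0.5 * L) / 0.05) ** 2)   # initial Gaussian pressure pulse
u_prev = u.copy()
mur = (c * dt - dx) / (c * dt + dx)        # first-order low-reflecting coefficient
for _ in range(1500):
    u_next = np.empty_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + (c * dt / dx) ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_next[0] = u[1] + mur * (u_next[1] - u[0])     # low-reflecting outer boundary
    u_next[-1] = u[-2] + mur * (u_next[-2] - u[-1])
    u_prev, u = u, u_next * damp                    # apply the layer filter
print('max remaining amplitude:', np.abs(u).max())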

Application Library paths for examples that use the new discontinuous Galerkin method:

Acoustics_Module/Ultrasound/ultrasound_flow_meter_generic

Acoustics_Module/Tutorials/gaussian_pulse_absorbing_layers

A Gaussian pressure pulse in the symmetry plane of a model where the waves are absorbed in the absorbing layers at the left and right side of the main flow channel using the newly introduced discontinuous Galerkin method.

Parametric Sweeps in Batch Mode Using a List of Parameters

You can now run a sweep using a list of parameter values as input without defining them in the user interface. This functionality was previously only available by configuring a Parametric Sweep in COMSOL Desktop®. The sweep runs once for each parameter value and stores the results of each run in a separate file. The list can also be read from a file.

An example of a batch command with a list of two parameters used as input arguments:

comsolbatch.exe -inputfile feeder_clamp.mph -pname D,d -plist 7,3.75,8,4,9,4.09,10,4.12,11,4.89,12,4.5

An example of the same sweep, but instead using a file parameters.csv for specifying the list of parameters:

comsolbatch.exe -inputfile feeder_clamp.mph -paramfile parameters.csv
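For reference, parameters.csv could then contain the lines below. This layout is only an assumption that mirrors the -pname and -plist arguments of the first command; the exact format accepted by -paramfile is described in the COMSOL Multiphysics documentation.

D,d
7,3.75
8,4
9,4.09
10,4.12
11,4.89
12,4.5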

Specifying Number of Sockets

In addition to the setting for specifying the number of cores, COMSOL Multiphysics® now has a new option for specifying the number of sockets used on a multisocket computer. This setting is available in the Multicore and Cluster Computing section of the Preferences window.

New Selection for Enabling and Disabling Multiphysics Couplings in a Study

In addition to the previously available table for physics interfaces to solve for, a new Multiphysics table allows you to selectively enable and disable available multiphysics couplings. This makes it easier to successively add complexity to a model while still using the preconfigured options for multiphysics couplings.

Enable and Disable Infinite Elements and Perfectly Matched Layers from a Study

The Definitions node in the model tree now allows you to enable and disable Infinite Element Domain and Perfectly Matched Layer nodes from a study. This is made available to you when you activate the Modify physics tree and variables option in the study.


Intel and Xeon are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries.

OpenMP and the OpenMP logo are registered trademarks of the OpenMP Architecture Review Board in the United States and other countries.