Multi-objective optimization (MOO) is the process of simultaneously optimizing two or more conflicting objectives. Most real-world materials problems are inherently multi-objective — you want a material that is strong and lightweight, or conductive and cheap.
MatCraft handles multi-objective optimization through a combination of surrogate models, scalarization, and Pareto analysis:
A separate surrogate model (or output head) is trained for each objective, so water flux, salt rejection, and cost are each predicted independently for a given composition. Each surrogate carries its own uncertainty estimate.
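To make the per-objective setup concrete, here is a toy sketch with one independent surrogate per objective. The k-NN surrogate class, its data, and the `surrogates` dict are all illustrative stand-ins, not MatCraft's actual models or API:

```python
import math

class KNNSurrogate:
    """Toy per-objective surrogate: k-NN mean prediction, with the
    neighbors' standard deviation as a crude uncertainty estimate.
    A stand-in for whatever model MatCraft actually trains."""

    def __init__(self, k=3):
        self.k = k
        self.X, self.y = [], []

    def fit(self, X, y):
        self.X, self.y = list(X), list(y)
        return self

    def predict(self, x):
        # Take the k nearest training points and aggregate their labels.
        nearest = sorted(
            (math.dist(x, xi), yi) for xi, yi in zip(self.X, self.y)
        )[: self.k]
        vals = [yi for _, yi in nearest]
        mean = sum(vals) / len(vals)
        std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
        return mean, std

# Illustrative alloy compositions (fractions) and measured objectives.
compositions = [(0.7, 0.2, 0.1), (0.6, 0.3, 0.1), (0.5, 0.3, 0.2), (0.8, 0.1, 0.1)]
hardness = [520.0, 560.0, 610.0, 480.0]
cost = [4.2, 4.8, 5.5, 3.9]

# One independent surrogate per objective -- each with its own uncertainty.
surrogates = {
    "hardness": KNNSurrogate(k=3).fit(compositions, hardness),
    "cost": KNNSurrogate(k=3).fit(compositions, cost),
}
for name, model in surrogates.items():
    mu, sigma = model.predict((0.65, 0.25, 0.10))
    print(f"{name}: {mu:.1f} +/- {sigma:.1f}")
```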
CMA-ES natively optimizes a single scalar value. MatCraft converts multiple objectives into a scalar using one of these strategies:
Weighted sum: `score = w1 * obj1_normalized + w2 * obj2_normalized`. Simple, but it can miss concave regions of the Pareto front.

Objectives and their optional weights are declared in the campaign configuration:

```yaml
objectives:
  - name: hardness
    direction: maximize
    weight: 0.6  # Optional; used for weighted scalarization
  - name: cost
    direction: minimize
    weight: 0.4
```

For the active learning loop, the acquisition function is extended to multi-objective settings using Expected Hypervolume Improvement (EHVI). EHVI measures how much a new candidate would expand the dominated volume of the Pareto front, naturally balancing all objectives.
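As a concrete sketch of the weighted-sum strategy, here is a minimal scalarizer with min-max normalization and direction handling. The `scalarize` helper, its argument shapes, and the bounds are hypothetical, illustrating the idea rather than MatCraft's internals:

```python
def scalarize(values, objectives, bounds):
    """Weighted-sum scalarization (hypothetical helper, not MatCraft API).

    values     -- {objective name: raw measured value}
    objectives -- list of {"name", "direction", "weight"} dicts
    bounds     -- {objective name: (min, max)} for min-max normalization
    Returns a single score to MAXIMIZE.
    """
    score = 0.0
    for obj in objectives:
        lo, hi = bounds[obj["name"]]
        norm = (values[obj["name"]] - lo) / (hi - lo)  # map into [0, 1]
        if obj["direction"] == "minimize":
            norm = 1.0 - norm  # flip so that higher is always better
        score += obj["weight"] * norm
    return score

# Mirrors the objectives block in the config above.
objectives = [
    {"name": "hardness", "direction": "maximize", "weight": 0.6},
    {"name": "cost", "direction": "minimize", "weight": 0.4},
]
bounds = {"hardness": (400.0, 700.0), "cost": (3.0, 7.0)}  # assumed ranges

print(scalarize({"hardness": 610.0, "cost": 5.5}, objectives, bounds))
```

A scalarizer like this is what lets a single-objective optimizer such as CMA-ES drive a multi-objective search; the known weakness, as noted above, is that fixed weights cannot reach concave regions of the Pareto front.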
After the campaign completes (or at any intermediate point), MatCraft extracts the Pareto front from all evaluated candidates. The front is available as a DataFrame, a plot, or an interactive visualization.
```python
results = campaign.run()

# Get Pareto-optimal candidates
pareto = results.pareto_front()
print(pareto[["hardness", "cost", "iron", "chromium", "nickel"]])

# Find the "knee" point -- the solution with the best balance
knee = results.pareto_knee()
print(f"Balanced solution: {knee}")
```

MatCraft supports 2-6 objectives efficiently. Beyond six objectives, the Pareto front becomes very large (in high dimensions most solutions are non-dominated) and the hypervolume computation becomes expensive. For many-objective problems (more than six), consider grouping related objectives or using preference-based scalarization.
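The extraction behind a call like `pareto_front()` reduces to a non-dominated filter. A minimal pure-Python sketch (illustrative, not MatCraft's implementation), assuming every objective has been oriented so that larger is better:

```python
def pareto_front(points):
    """Return the non-dominated points, assuming all coordinates are to
    be MAXIMIZED (negate any minimized objective before calling)."""

    def dominates(a, b):
        # a dominates b if it is at least as good everywhere
        # and strictly better somewhere.
        return all(x >= y for x, y in zip(a, b)) and any(
            x > y for x, y in zip(a, b)
        )

    return [p for p in points if not any(dominates(q, p) for q in points)]

# (hardness, -cost): cost is negated so both coordinates are maximized.
candidates = [(520, -4.2), (560, -4.8), (610, -5.5), (480, -3.9), (500, -5.0)]
print(pareto_front(candidates))  # (500, -5.0) is dominated by (520, -4.2)
```

This brute-force filter is O(n^2) in the number of candidates, which also hints at why the front and its hypervolume become expensive to work with as the objective count grows.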