Yes. MatCraft supports custom objective functions for cases where the built-in domain physics models do not cover your needs. You can define objectives as Python functions, external scripts, or API endpoints.
The most common approach is to subclass `Evaluator` and implement an `evaluate` method that takes a composition dictionary and returns a dictionary of objective values:
```python
import subprocess

from materia import Campaign, Material
from materia.evaluate import Evaluator


class MyEvaluator(Evaluator):
    """Custom evaluator for polymer blend properties."""

    def evaluate(self, composition: dict) -> dict:
        # Your custom logic here
        polymer_a = composition["polymer_a_fraction"]
        polymer_b = composition["polymer_b_fraction"]
        filler = composition["filler_loading"]

        # Could be an analytical model, a simulation call, or a lookup
        tensile_strength = self._run_simulation(polymer_a, polymer_b, filler)
        elongation = 150 - 200 * filler + 50 * polymer_b
        cost = 10 * polymer_a + 25 * polymer_b + 5 * filler

        return {
            "tensile_strength": tensile_strength,
            "elongation_at_break": elongation,
            "material_cost": cost,
        }

    def _run_simulation(self, a, b, f):
        # Call your simulation code, external binary, or API
        result = subprocess.run(
            ["./my_sim", str(a), str(b), str(f)],
            capture_output=True, text=True,
        )
        return float(result.stdout.strip())


material = Material.from_yaml("my_blend.yaml")
campaign = Campaign(
    material=material,
    evaluator=MyEvaluator(),
    max_iterations=15,
)
campaign.run()
```

For simple closed-form expressions, use the built-in analytic evaluator without writing a class:
```yaml
evaluator:
  type: analytic
  expressions:
    tensile_strength: "120 * polymer_a + 80 * polymer_b + 200 * filler"
    cost: "10 * polymer_a + 25 * polymer_b + 5 * filler"
```

If your simulation runs as a web service:
```yaml
evaluator:
  type: http
  url: "https://my-simulation-server.com/evaluate"
  method: POST
  headers:
    Authorization: "Bearer ${SIM_API_TOKEN}"
  timeout: 300  # seconds
```

MatCraft will POST the composition as JSON and expect a JSON response with objective values.
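On the service side, any endpoint that accepts that JSON contract will work. Here is a minimal sketch using only the Python standard library; the objective formulas and composition keys are placeholders standing in for your real simulation, not anything MatCraft prescribes:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def compute_objectives(composition: dict) -> dict:
    # Hypothetical stand-in for a real simulation; returns the same
    # shape of result dictionary as the custom evaluator above.
    a = composition["polymer_a_fraction"]
    b = composition["polymer_b_fraction"]
    return {
        "tensile_strength": 120 * a + 80 * b,
        "material_cost": 10 * a + 25 * b,
    }


class EvaluateHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the composition MatCraft POSTs as JSON...
        length = int(self.headers.get("Content-Length", 0))
        composition = json.loads(self.rfile.read(length))
        # ...and reply with a JSON object of objective values.
        body = json.dumps(compute_objectives(composition)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


# To run the service:
#   HTTPServer(("", 8000), EvaluateHandler).serve_forever()
```

Keep the endpoint stateless if you can: MatCraft may retry on timeout, and an idempotent evaluation makes retries safe.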
For physical experiments, set the evaluator to manual. The campaign will pause at each iteration, display the proposed candidates on the dashboard, and wait for you to enter measured values before continuing:
```yaml
evaluator:
  type: manual
```

This is the recommended mode for lab-based optimization where each "evaluation" is a physical experiment.
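Whichever evaluator you choose, it is worth sanity-checking your objective logic on a known composition before spending campaign iterations on it. As a standalone check of the analytic expressions shown earlier, written as a plain Python function (a hypothetical helper for testing, not a MatCraft API):

```python
def objectives(polymer_a: float, polymer_b: float, filler: float) -> dict:
    # Same closed-form expressions as the analytic evaluator config above.
    return {
        "tensile_strength": 120 * polymer_a + 80 * polymer_b + 200 * filler,
        "cost": 10 * polymer_a + 25 * polymer_b + 5 * filler,
    }


# Spot-check a composition you already understand before optimizing.
print(objectives(0.6, 0.3, 0.1))
```

A few such spot checks catch unit mistakes and sign errors far more cheaply than a stalled optimization run.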