price

Compute price for equity instrument with AssetReinforcementLearning pricer

Since R2026a

Description

[Price,priceResultData] = price(inpPricer,inpInstrument) computes the equity option instrument price and related pricing information based on the pricing object inpPricer and the instrument object inpInstrument.

In addition, after you obtain a trained agent through the pricing procedure, you can reuse that agent to price an option. For an example of this workflow, see Use Trained Agent to Directly Price Option.

Note

The price function for the AssetReinforcementLearning pricer requires the installation of Reinforcement Learning Toolbox™.

[Price,priceResultData] = price(___,Name=Value) adds name-value arguments in addition to the required arguments in the previous syntax. For example, [Price,priceResultData] = price(AssetReinforcementLearningPricer,Vanilla,Agent="LSPI",Training=true) computes the equity option instrument price and related pricing information based on the pricing object inpPricer and the instrument object inpInstrument using the name-value arguments for Agent and Training.

Examples

This example shows the workflow to price a Vanilla instrument with an "American" ExerciseStyle using a BlackScholes model and the AssetReinforcementLearning pricing method. To use this functionality, you must have Reinforcement Learning Toolbox™ installed.

Create Vanilla Instrument Object

Use fininstrument to create a Vanilla instrument object.

VanillaOpt = fininstrument("Vanilla",ExerciseDate=datetime(2021,8,15),Strike=110,OptionType="put",ExerciseStyle="american",Name="vanilla_option")
VanillaOpt = 
  Vanilla with properties:

       OptionType: "put"
    ExerciseStyle: "american"
     ExerciseDate: 15-Aug-2021
           Strike: 110
             Name: "vanilla_option"

Create BlackScholes Model Object

Use finmodel to create a BlackScholes model object.

BSModel = finmodel("BlackScholes",Volatility=0.2)
BSModel = 
  BlackScholes with properties:

     Volatility: 0.2000
    Correlation: 1

Create ratecurve Object

Create a ratecurve object using ratecurve.

Settle = datetime(2019,1,1);
Type = 'zero';
ZeroTimes = [calmonths(6) calyears([1 2 3 4 5 7 10 20 30])]';
ZeroRates = [0.0052 0.0055 0.0061 0.0073 0.0094 0.0119 0.0168 0.0222 0.0293 0.0307]';
ZeroDates = Settle + ZeroTimes;

myRC = ratecurve('zero',Settle,ZeroDates,ZeroRates)
myRC = 
  ratecurve with properties:

                 Type: "zero"
          Compounding: -1
                Basis: 0
                Dates: [10×1 datetime]
                Rates: [10×1 double]
               Settle: 01-Jan-2019
         InterpMethod: "linear"
    ShortExtrapMethod: "next"
     LongExtrapMethod: "previous"

Create AssetReinforcementLearning Pricer Object

Use finpricer to create an AssetReinforcementLearning pricer object and use the ratecurve object for the 'DiscountCurve' name-value pair argument.

SpotPrice = 100;
SimDates = [Settle+days(1):days(2):Settle+years(2)];

outPricer = finpricer("AssetReinforcementLearning",DiscountCurve=myRC,Model=BSModel,SpotPrice=SpotPrice,SimulationDates=SimDates)
outPricer = 
  AssetReinforcementLearning with properties:

      DiscountCurve: [1×1 ratecurve]
          SpotPrice: 100
    SimulationDates: [02-Jan-2019    04-Jan-2019    06-Jan-2019    08-Jan-2019    10-Jan-2019    12-Jan-2019    14-Jan-2019    16-Jan-2019    18-Jan-2019    20-Jan-2019    22-Jan-2019    24-Jan-2019    26-Jan-2019    28-Jan-2019    …    ] (1×365 datetime)
          NumTrials: 1000
              Model: [1×1 finmodel.BlackScholes]
       DividendType: "continuous"
      DividendValue: 0

Price Vanilla Instrument

Use price to compute the price for the Vanilla instrument.

[Price,priceResultData] = price(outPricer,VanillaOpt)
Price = 
16.4847
priceResultData = 
  priceresult with properties:

       Results: [1×1 table]
    PricerData: [1×1 struct]

priceResultData.PricerData
ans = struct with fields:
    SimulationTimes: [366×1 timetable]
              Paths: [366×1×1000 double]
      TrainingStats: [1×1 rl.train.result.rlTrainingResult]
              Agent: [1×1 rl.agent.rlLSPIAmericanOptionAgent]

This example shows the workflow to price a Vanilla instrument with an "American" ExerciseStyle using a Heston model and the AssetReinforcementLearning pricing method. To use this functionality, you must have Reinforcement Learning Toolbox™ installed.

Create Vanilla Instrument Object

Use fininstrument to create a Vanilla instrument object.

VanillaOpt = fininstrument("Vanilla",ExerciseDate=datetime(2021,8,15),Strike=110,OptionType="put",ExerciseStyle="american",Name="vanilla_option")
VanillaOpt = 
  Vanilla with properties:

       OptionType: "put"
    ExerciseStyle: "american"
     ExerciseDate: 15-Aug-2021
           Strike: 110
             Name: "vanilla_option"

Create Heston Model Object

Use finmodel to create a Heston model object.

HestonModel = finmodel("Heston",V0=0.032,ThetaV=0.1,Kappa=0.003,SigmaV=0.08,RhoSV=0.9)
HestonModel = 
  Heston with properties:

        V0: 0.0320
    ThetaV: 0.1000
     Kappa: 0.0030
    SigmaV: 0.0800
     RhoSV: 0.9000

Create ratecurve Object

Create a ratecurve object using ratecurve.

Settle = datetime(2019,1,1);
Type = 'zero';
ZeroTimes = [calmonths(6) calyears([1 2 3 4 5 7 10 20 30])]';
ZeroRates = [0.0052 0.0055 0.0061 0.0073 0.0094 0.0119 0.0168 0.0222 0.0293 0.0307]';
ZeroDates = Settle + ZeroTimes;

myRC = ratecurve('zero',Settle,ZeroDates,ZeroRates)
myRC = 
  ratecurve with properties:

                 Type: "zero"
          Compounding: -1
                Basis: 0
                Dates: [10×1 datetime]
                Rates: [10×1 double]
               Settle: 01-Jan-2019
         InterpMethod: "linear"
    ShortExtrapMethod: "next"
     LongExtrapMethod: "previous"

Create AssetReinforcementLearning Pricer Object

Use finpricer to create an AssetReinforcementLearning pricer object and use the ratecurve object for the 'DiscountCurve' name-value pair argument.

SpotPrice = 100;
SimDates = [Settle+days(1):days(2):Settle+years(2)];

outPricer = finpricer("AssetReinforcementLearning",DiscountCurve=myRC,Model=HestonModel,SpotPrice=SpotPrice,SimulationDates=SimDates)
outPricer = 
  AssetReinforcementLearning with properties:

      DiscountCurve: [1×1 ratecurve]
          SpotPrice: 100
    SimulationDates: [02-Jan-2019    04-Jan-2019    06-Jan-2019    08-Jan-2019    10-Jan-2019    12-Jan-2019    14-Jan-2019    16-Jan-2019    18-Jan-2019    20-Jan-2019    22-Jan-2019    24-Jan-2019    26-Jan-2019    28-Jan-2019    …    ] (1×365 datetime)
          NumTrials: 1000
              Model: [1×1 finmodel.Heston]
       DividendType: "continuous"
      DividendValue: 0

Price Vanilla Instrument

Use price to compute the price for the Vanilla instrument.

[Price,priceResultData] = price(outPricer,VanillaOpt)
Price = 
15.5194
priceResultData = 
  priceresult with properties:

       Results: [1×1 table]
    PricerData: [1×1 struct]

priceResultData.PricerData
ans = struct with fields:
    SimulationTimes: [366×1 timetable]
              Paths: [366×2×1000 double]
      TrainingStats: [1×1 rl.train.result.rlTrainingResult]
              Agent: [1×1 rl.agent.rlLSPIAmericanOptionAgent]

Use a trained agent (rl.agent.rlLSPIAmericanOptionAgent) from priceResultData.PricerData that you previously obtained by using the AssetReinforcementLearning pricer.

This example uses the agent (rl.agent.rlLSPIAmericanOptionAgent) that is created in the example Use AssetReinforcementLearning Pricer and BlackScholes Model to Price Vanilla Instrument with American ExerciseStyle.

Load the .mat file containing the priceResultData.PricerData.Agent.

load pricerdata.mat

Compute the price of the Vanilla option using the price function.

agent = priceResultData.PricerData.Agent;
pr = price(outPricer,VanillaOpt, Agent=agent, Training=false)
pr = 
16.4297

Input Arguments

Pricer object, specified as a previously created AssetReinforcementLearning pricer object. Create the pricer object using finpricer.

Data Types: object

Instrument object, specified as a scalar, previously created instrument object. Create the Vanilla instrument object with an ExerciseStyle of "American" or "Bermudan" using fininstrument.

Data Types: object
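As a hedged illustration of the "Bermudan" case mentioned above, the sketch below creates a Vanilla instrument with a Bermudan exercise style. It is untested and assumes that, for a Bermudan ExerciseStyle, the ExerciseDate argument of fininstrument accepts a vector of exercise dates; the dates and strike are arbitrary placeholders, not values from this page.

```matlab
% Sketch (assumption): Vanilla instrument with a Bermudan exercise style,
% where ExerciseDate is assumed to take a vector of exercise dates.
BermudanOpt = fininstrument("Vanilla", ...
    ExerciseDate=[datetime(2020,8,15) datetime(2021,8,15)], ...
    Strike=110,OptionType="put",ExerciseStyle="bermudan", ...
    Name="bermudan_option");
```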

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: [Price,priceResultData] = price(AssetReinforcementLearningPricer,Vanilla,Agent="LSPI",Training=true)

Reinforcement learning agent, specified as a scalar string with the value "LSPI", or as a trained agent object (rl.agent.rlLSPIAmericanOptionAgent) returned by price.

Data Types: string | object

Flag to indicate whether the training process runs before pricing, specified as a scalar logical.

Note

Training is always true when the Agent value is "LSPI".

Data Types: logical

Output Arguments

Instrument price, returned as a numeric scalar.

Price result, returned as a priceresult object. The object has the following fields:

  • priceResultData.Results — Option price

  • priceResultData.PricerData — Structure that contains the following fields:

    • Agent — Trained built-in agent of type rl.agent.rlLSPIAmericanOptionAgent

      Note

      Training must be conducted through the finpricer.price interface. Directly using the agent.train method is not supported.

      After having trained an agent using the workflow described in Price Vanilla Instrument with American ExerciseStyle Using BlackScholes Model and AssetReinforcementLearning Pricer, you can price an option by reusing the trained agent directly:

      agent = priceResultData.PricerData.Agent;
      pr = price(Pricer_2, financialInstrument_2, Agent=agent, Training=false)
      

      For an example of pricing an option by reusing the trained agent, see Use Trained Agent to Directly Price Option.

      You can also retrain the agent from its last trained state. For example:

      pr = price(Pricer_2, financialInstrument_2, Agent=agent, Training=true)

    • TrainingStats — Training statistics, returned as an rl.train.result.rlTrainingResult object when training has been performed

    • SimulationTimes — Timetable of simulation times

    • Paths — Simulated asset price paths
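The fields listed above can be read directly from the structure returned in priceResultData.PricerData. The sketch below assumes priceResultData comes from an earlier [Price,priceResultData] = price(...) call such as the ones in the examples on this page; the dimensions noted in the comments follow the struct displays shown earlier.

```matlab
% Sketch: inspect the pricer data returned by price.
pd    = priceResultData.PricerData;
agent = pd.Agent;           % trained rl.agent.rlLSPIAmericanOptionAgent
stats = pd.TrainingStats;   % rl.train.result.rlTrainingResult (when trained)
paths = pd.Paths;           % simulated paths, e.g. 366-by-1-by-1000 double
times = pd.SimulationTimes; % timetable of simulation times
```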


Version History

Introduced in R2026a