Catalogue Search | MBRL
495 result(s) for "60G40"
Random Walks in Cones
2015
We study the asymptotic behavior of a multidimensional random walk in a general cone. We find the tail asymptotics for the exit time and prove integral and local limit theorems for a random walk conditioned to stay in a cone. The main step in the proof consists in constructing a positive harmonic function for our random walk under minimal moment restrictions on the increments. For the proof of tail asymptotics and integral limit theorems, we use a strong approximation of random walks by Brownian motion. For the proof of local limit theorems, we suggest a rather simple approach, which combines integral theorems for random walks in cones with classical local theorems for unrestricted random walks. We also discuss some possible applications of our results to ordered random walks and lattice path enumeration.
Journal Article
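The exit-time tail asymptotics described above can be probed numerically. The sketch below is a minimal Monte Carlo illustration, not the paper's harmonic-function construction; the walk (independent ±1 coordinate steps) and the cone (the positive quadrant) are assumptions chosen for simplicity.

```python
import random

def exit_time(max_steps, start=(1.0, 1.0)):
    """First time a planar walk with independent +-1 coordinate steps leaves
    the positive quadrant; capped at max_steps (max_steps + 1 = still inside)."""
    x, y = start
    for n in range(1, max_steps + 1):
        x += random.choice((-1.0, 1.0))
        y += random.choice((-1.0, 1.0))
        if x <= 0.0 or y <= 0.0:
            return n
    return max_steps + 1

def tail_probability(n, trials=2000, seed=0):
    """Monte Carlo estimate of P(tau > n), the exit-time tail."""
    random.seed(seed)
    return sum(exit_time(n) > n for _ in range(trials)) / trials
```

Plotting such estimates against n on a log-log scale is a quick way to see the polynomial decay rate that the paper identifies via the harmonic function.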
Some Strong Limit Theorems in Averaging
2024
The paper deals with the fast-slow motion setups in the discrete time
$$X^\varepsilon((n+1)\varepsilon) = X^\varepsilon(n\varepsilon) + \varepsilon B\big(X^\varepsilon(n\varepsilon), \xi(n)\big), \quad n = 0, 1, \ldots, [T/\varepsilon]$$
and the continuous time
$$\frac{dX^\varepsilon(t)}{dt} = B\big(X^\varepsilon(t), \xi(t/\varepsilon)\big), \quad t \in [0, T],$$
where $B$ is a vector function, smooth in the first variable, and $\xi$ is a sufficiently fast mixing stationary stochastic process. It is known since (Khasminskii in Theory Probab Appl 11:211–228, 1966) that if $\bar X$ is the averaged motion, then $G^\varepsilon = \varepsilon^{-1/2}(X^\varepsilon - \bar X)$ weakly converges to a Gaussian process $G$. We will show that for each $\varepsilon$ the processes $\xi$ and $G$ can be redefined on a sufficiently rich probability space, without changing their distributions, so that
$$E \sup_{0 \le t \le T} |G^\varepsilon(t) - G(t)|^{2M} = O(\varepsilon^\delta), \quad \delta > 0,$$
which also gives an $O(\varepsilon^{\delta/3})$ estimate of the Prokhorov distance between the distributions of $G^\varepsilon$ and $G$. This also provides convergence estimates in the Kantorovich–Rubinstein (or Wasserstein) metrics. In the product case $B(x, \xi) = \Sigma(x)\xi$ we also obtain almost sure convergence estimates of the form $\sup_{0 \le t \le T} |G^\varepsilon(t) - G(t)| = O(\varepsilon^\delta)$ a.s., as well as Strassen's form of the law of the iterated logarithm for $G^\varepsilon$. We note that our mixing assumptions are adapted to fast motions generated by important classes of dynamical systems.
Journal Article
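The discrete-time fast-slow scheme lends itself to a toy simulation. Everything below is an illustrative assumption, not the paper's setup: $B(x, \xi) = -x + \xi$ and an AR(1) process for $\xi$, which is stationary and geometrically mixing. Since $E\xi = 0$, the averaged motion solves $d\bar X/dt = -\bar X$.

```python
import math
import random

def simulate_slow(eps, T=1.0, seed=0):
    """Iterate X((n+1)*eps) = X(n*eps) + eps * B(X(n*eps), xi(n)) up to [T/eps],
    with the assumed choice B(x, xi) = -x + xi and a fast AR(1) noise xi."""
    rng = random.Random(seed)
    rho = 0.5                             # AR(1) coefficient; mixing is geometric
    x, xi = 1.0, rng.gauss(0.0, 1.0)      # xi(0) ~ N(0,1) is the stationary law
    for _ in range(int(T / eps)):
        x += eps * (-x + xi)              # B(x, xi) = -x + xi
        xi = rho * xi + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
    return x
```

For small eps the output should be close to the averaged value exp(-T), with a fluctuation of order sqrt(eps), consistent with the Gaussian limit of $G^\varepsilon$ stated in the abstract.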
Markov Chains in the Domain of Attraction of Brownian Motion in Cones
2025
We consider a multidimensional Markov chain X converging to a multidimensional Brownian motion. We construct a positive harmonic function for X killed on exiting the cone. We show that its asymptotic behaviour is similar to that of the harmonic function of Brownian motion. We use the harmonic function to study the asymptotic behaviour of the tail distribution of the exit time $\tau$ of X from a cone.
Journal Article
Green Function for an Asymptotically Stable Random Walk in a Half Space
2024
We consider an asymptotically stable multidimensional random walk $S(n) = (S_1(n), \ldots, S_d(n))$. For every vector $x = (x_1, \ldots, x_d)$ with $x_1 \ge 0$, let
$$\tau_x := \min\{n > 0 : x_1 + S_1(n) \le 0\}$$
be the first time the random walk $x + S(n)$ leaves the upper half space. We obtain the asymptotics of
$$p_n(x, y) := P\big(x + S(n) \in y + \Delta,\ \tau_x > n\big)$$
as $n$ tends to infinity, where $\Delta$ is a fixed cube. From that, we obtain the local asymptotics for the Green function $G(x, y) := \sum_n p_n(x, y)$, as $|y|$ and/or $|x|$ tend to infinity.
Journal Article
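Only the first coordinate decides when the walk leaves the upper half space, so $P(\tau_x > n)$ can be estimated with a one-dimensional simulation. The sketch assumes a simple symmetric ±1 walk; the paper treats general asymptotically stable walks.

```python
import random

def survival_in_half_space(x1, n, trials=4000, seed=1):
    """Monte Carlo estimate of P(tau_x > n) for a simple symmetric +-1 walk:
    the probability that x1 + S_1 stays strictly positive for n steps."""
    rng = random.Random(seed)
    alive = 0
    for _ in range(trials):
        s = x1
        for _ in range(n):
            s += rng.choice((-1, 1))
            if s <= 0:          # the walk has left the upper half space
                break
        else:
            alive += 1
    return alive / trials
```

For this toy walk the survival probability decays like a constant times $x_1/\sqrt{n}$, which mirrors the role the boundary distance plays in the local asymptotics of $p_n(x, y)$.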
The last-success stopping problem with random observation times
by Gnedin, Alexander; Derbazi, Zakaria
in Hypergeometric functions, Mathematical analysis, Parameters
2025
Suppose N independent Bernoulli trials with success probabilities $p_1, p_2, \ldots$ are observed sequentially at times of a mixed binomial process. The task is to maximise, by using a nonanticipating stopping strategy, the probability of stopping at the last success. The case $p_k = 1/k$ has been studied by many authors as a version of the familiar best choice problem, where both N and the observation times are random. We consider a more general profile $p_k = \theta/(\theta + k - 1)$ and assume that the prior distribution of N is negative binomial with shape parameter $\nu$, so the arrivals occur at times of a mixed Poisson process. The setting with two parameters offers high flexibility in understanding the nature of the optimal strategy, which we show is intrinsically related to monotonicity properties of the Gaussian hypergeometric function. Using this connection, we find that the myopic stopping strategy is optimal if and only if $\nu \ge \theta$. Furthermore, we derive formulas to assess the winning probability and discuss limit forms of the problem for large N.
Journal Article
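For a fixed, known N, the related benchmark rule is Bruss's odds algorithm: sum the odds $r_k = p_k/(1-p_k)$ from the last trial backwards and stop at the first success at or after the index where the sum first reaches 1. The sketch below applies it to the profile $p_k = \theta/(\theta+k-1)$; the paper's setting, with random N and random observation times, needs the more delicate analysis described above.

```python
def odds_threshold(ps):
    """Bruss's odds algorithm for a fixed number of Bernoulli trials.
    Sums the odds r_k = p_k / (1 - p_k) from the last trial backwards and
    returns the 0-based index from which on one should stop at any success."""
    total = 0.0
    for k in range(len(ps) - 1, -1, -1):
        if ps[k] >= 1.0:        # infinite odds: always stop from here on
            return k
        total += ps[k] / (1.0 - ps[k])
        if total >= 1.0:
            return k
    return 0

# Profile p_k = theta / (theta + k - 1) for k = 1..N (list index k - 1).
theta, N = 2.0, 20
ps = [theta / (theta + k) for k in range(N)]
start = odds_threshold(ps)   # stop at the first success from trial start + 1 on
```

The threshold index grows with N, reflecting the familiar "observe first, then commit" structure of last-success problems.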
Some aspects of statistical causality
2025
Causal thinking is deeply embedded in scientific understanding of the problems of applied statistics. Causal relations cannot always be established by experiments, and the researcher is often restricted to observing the system he wants to describe. This is the case in many fields, for example in economics, demography, and neuroscience. In this paper we give different concepts of causality between σ-algebras and between Hilbert spaces, using conditional independence and conditional orthogonality, respectively, which can be applied both to stochastic processes and to events. These definitions are based on Granger's definition of causality, which has important applications in economics (see Florens, Mouchart, 1982; Florens, Fougère, 1996; McCrorie, Chambers, 2006) and in other disciplines; for a recent application in neuroscience, see Valdes-Sosa, Roebroeck, Daunizeau, Friston, 2011. The study of Granger's causality has mainly been concerned with discrete-time processes (i.e. time series). We shall instead concentrate on continuous-time processes, since many systems to which it is natural to apply tests of causality evolve in continuous time; this is generally the case in economics, demography, and finance. The definitions of causality given here extend the ones already given in the case of discrete-time processes. This paper is a comprehensive survey of causality concepts between flows of information represented by filtrations and by Hilbert spaces. Some new results are also given in Sections 4 and 5.
Journal Article
A Zero-Sum Game between a Singular Stochastic Controller and a Discretionary Stopper
2015
We consider a stochastic differential equation that is controlled by means of an additive finite-variation process. A singular stochastic controller, who is a minimizer, determines this finite-variation process, while a discretionary stopper, who is a maximizer, chooses a stopping time at which the game terminates. We consider two closely related games that are differentiated by whether the controller or the stopper has a first-move advantage. The games' performance indices involve a running payoff as well as a terminal payoff and penalize control effort expenditure. We derive a set of variational inequalities that can fully characterize the games' value functions as well as yield Markovian optimal strategies. In particular, we derive the explicit solutions to two special cases and we show that, in general, the games' value functions fail to be $C^1$. The nonuniqueness of the optimal strategy is an interesting feature of the game in which the controller has the first-move advantage.
Journal Article
Deep neural network expressivity for optimal stopping problems
2024
This article studies deep neural network expression rates for optimal stopping problems of discrete-time Markov processes on high-dimensional state spaces. A general framework is established in which the value function and continuation value of an optimal stopping problem can be approximated with error at most $\varepsilon$ by a deep ReLU neural network of size at most $\kappa d^q \varepsilon^{-r}$. The constants $\kappa, q, r \ge 0$ do not depend on the dimension $d$ of the state space or the approximation accuracy $\varepsilon$. This proves that deep neural networks do not suffer from the curse of dimensionality when employed to approximate solutions of optimal stopping problems. The framework covers, for example, exponential Lévy models, discrete diffusion processes and their running minima and maxima. These results mathematically justify the use of deep neural networks for numerically solving optimal stopping problems and pricing American options in high dimensions.
Journal Article
Exact Simulation of the First-Passage Time of Diffusions
2019
Since diffusion processes arise in so many different fields, efficient techniques for simulating sample paths, such as discretization schemes, are crucial tools in applied probability. Such methods yield approximations of the first-passage times as a by-product. For efficiency reasons, it is particularly challenging to simulate this hitting time directly, without constructing the whole path. In the Brownian case, the distribution of the first-passage time is explicitly known and can easily be used for simulation purposes. The authors introduce a new rejection sampling algorithm which permits exact simulation of the first-passage time for general one-dimensional diffusion processes. The efficiency of the method, which is essentially based on Girsanov's transformation, is demonstrated through theoretical results and numerical examples.
Journal Article
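In the driftless Brownian case the explicit law mentioned above makes exact sampling a one-liner with no rejection step: the first-passage time to a level $a > 0$ follows the Lévy distribution with scale $a^2$, so $T = (a/Z)^2$ with $Z$ standard normal. A minimal illustration of that special case (the authors' rejection algorithm is what handles general one-dimensional diffusions):

```python
import random

def bm_first_passage(a, rng=random):
    """Exact sample of the first time standard (driftless) Brownian motion
    hits level a > 0: T = (a / Z)**2 with Z standard normal, since T follows
    the Levy distribution with scale a**2."""
    z = rng.gauss(0.0, 1.0)
    while z == 0.0:              # avoid division by zero (probability ~0)
        z = rng.gauss(0.0, 1.0)
    return (a / z) ** 2
```

Note that T has a heavy tail with no finite mean, so summaries of simulated samples should use quantiles (the median is about 2.2 a^2) rather than averages.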