Documentation: Fix default values #3562

Merged · 1 commit · Sep 30, 2023
12 changes: 6 additions & 6 deletions Docs/sphinx_documentation/source/GPU.rst
@@ -1738,14 +1738,14 @@ by "amrex" in your :cpp:`inputs` file.
+----------------------------+-----------------------------------------------------------------------+-------------+----------+
| | Description | Type | Default |
+============================+=======================================================================+=============+==========+
-| use_gpu_aware_mpi | Whether to use GPU memory for communication buffers during MPI calls. | Bool | False |
-| | If true, the buffers will use device memory. If false, they will use | | |
-| | pinned memory. In practice, we find it is usually not worth it to use | | |
-| | GPU aware MPI. | | |
+| use_gpu_aware_mpi | Whether to use GPU memory for communication buffers during MPI calls. | Bool | 0 |
+| | If true, the buffers will use device memory. If false (i.e., 0), they | | |
+| | will use pinned memory. In practice, we find it is not always worth | | |
+| | it to use GPU aware MPI. | | |
+----------------------------+-----------------------------------------------------------------------+-------------+----------+
-| abort_on_out_of_gpu_memory | If the size of free memory on the GPU is less than the size of a | Bool | False |
+| abort_on_out_of_gpu_memory | If the size of free memory on the GPU is less than the size of a | Bool | 0 |
| | requested allocation, AMReX will call AMReX::Abort() with an error | | |
| | describing how much free memory there is and what was requested. | | |
+----------------------------+-----------------------------------------------------------------------+-------------+----------+
-| the_arena_is_managed | Whether :cpp:`The_Arena()` allocates managed memory. | Bool | False |
+| the_arena_is_managed | Whether :cpp:`The_Arena()` allocates managed memory. | Bool | 0 |
+----------------------------+-----------------------------------------------------------------------+-------------+----------+
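
For reference, these flags are read from the inputs file with the "amrex" prefix, as the surrounding text in GPU.rst notes. A minimal sketch of overriding the defaults listed above; the values are illustrative only, not recommendations:

    # excerpt from an inputs file (illustrative values; each default in the table above is 0)
    amrex.use_gpu_aware_mpi          = 1   # use device memory for MPI communication buffers
    amrex.abort_on_out_of_gpu_memory = 1   # call AMReX::Abort() if a GPU allocation exceeds free memory
    amrex.the_arena_is_managed       = 0   # The_Arena() does not allocate managed memory (the default)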
2 changes: 1 addition & 1 deletion Docs/sphinx_documentation/source/InputsPlotFiles.rst
@@ -12,7 +12,7 @@ as whether a plotfile should be written out immediately after restarting a simul
| plot_int | Frequency of plotfile output; | Int | -1 |
| | if -1 then no plotfiles will be written | | |
+---------------------+-----------------------------------------------------------------------+-------------+-----------+
-| plotfile_on_restart | Should we write a plotfile when we restart (only used if plot_int>0) | Bool | False |
+| plotfile_on_restart | Should we write a plotfile when we restart (only used if plot_int>0) | Bool | 0 (false) |
+---------------------+-----------------------------------------------------------------------+-------------+-----------+
| plot_file | Prefix to use for plotfile output | String | plt |
+---------------------+-----------------------------------------------------------------------+-------------+-----------+
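
A minimal sketch of the corresponding inputs-file settings. The "amr" prefix is an assumption here (it varies by application code); check the preamble of InputsPlotFiles.rst for the exact prefix required:

    # illustrative plotfile controls (the "amr." prefix is an assumption)
    amr.plot_int            = 100   # write a plotfile every 100 steps; -1 disables plotfile output
    amr.plotfile_on_restart = 1     # also write a plotfile right after a restart (default 0)
    amr.plot_file           = plt   # plotfile name prefix (default "plt")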
2 changes: 1 addition & 1 deletion Docs/sphinx_documentation/source/LinearSolvers.rst
@@ -565,7 +565,7 @@ The following parameter should be set to True if the problem to be solved has a
In this case, the solution is only defined to within a constant. Setting this parameter to True
replaces one row in the matrix sent to hypre from AMReX by a row that sets the value at one cell to 0.

-- :cpp:`hypre.adjust_singular_matrix`: Default is False.
+- :cpp:`hypre.adjust_singular_matrix`: Default is false.
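
A minimal sketch of enabling this in the inputs file for a singular problem whose solution is only defined up to a constant; the textual true/false form follows the wording of this section:

    # illustrative: let AMReX replace one matrix row so the value at one cell is pinned to 0
    hypre.adjust_singular_matrix = true   # default is false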


The following parameters can be set in the inputs file to control the choice of preconditioner and smoother:
4 changes: 2 additions & 2 deletions Docs/sphinx_documentation/source/Particle.rst
@@ -713,7 +713,7 @@ with OpenMP, the first thing to look at is whether there are enough tiles availa
+-------------------+-----------------------------------------------------------------------+-------------+-------------+
| | Description | Type | Default |
+===================+=======================================================================+=============+=============+
-| do_tiling | Whether to use tiling for particles. Should be on when using OpenMP, | Bool | False |
+| do_tiling | Whether to use tiling for particles. Should be on when using OpenMP, | Bool | false |
| | and off when running on GPUs. | | |
+-------------------+-----------------------------------------------------------------------+-------------+-------------+
| tile_size | If tiling is on, the maximum tile_size to in each direction | Ints | 1024000,8,8 |
@@ -739,7 +739,7 @@ problems with particle IO, you could try varying some / all of these parameters.
| datadigits_read | This for backwards compatibility, don't use unless you need to read | Int | 5 |
| | and old (pre mid 2017) AMReX dataset. | | |
+-------------------+-----------------------------------------------------------------------+-------------+-------------+
-| use_prepost | This is an optimization for large particle datasets that groups MPI | Bool | False |
+| use_prepost | This is an optimization for large particle datasets that groups MPI | Bool | false |
| | calls needed during the IO together. Try it seeing poor IO speeds | | |
| | on large problems. | | |
+-------------------+-----------------------------------------------------------------------+-------------+-------------+
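
A minimal sketch of the particle tiling and IO settings discussed above. The "particles" prefix is an assumption based on how AMReX's ParticleContainer typically reads its runtime parameters; adjust it to whatever prefix the code at hand uses:

    # illustrative particle settings (the "particles." prefix is an assumption)
    particles.do_tiling   = 1             # enable tiling (useful with OpenMP, off on GPUs); default 0
    particles.tile_size   = 1024000 8 8   # maximum tile size in each direction (the default)
    particles.use_prepost = 0             # group particle-IO MPI calls; try 1 if IO is slow on large runs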