
Tidiness #40

Open · 6 tasks

feiyu-chen96 opened this issue Feb 26, 2020 · 0 comments

Things that should be cleaned up:

  • how the library is broken into submodules and classes: submodules like ode, pde, pde_spherical, and tdim (time-dependent PDEs under development) are not mutually exclusive, and many functions and classes share very similar logic (e.g. ode.solve_system and pde.solve2D_system). This amplifies the work needed to make changes: almost every change to pde.solve2D_system requires a matching change to ode.solve_system. We may want to extract this shared logic (see the first sketch after this list).
  • function signatures are not carefully thought out:
    • keywords whose values can contradict each other, e.g. in pde.solve2D_system, what happens if the domain indicated by xy_min and xy_max differs from the one implied by train_generator? (see the second sketch after this list)
    • keywords that come from different abstraction levels, e.g. pde.solve2D_system mixes keywords from the realm of differential equations (higher level) with keywords from the realm of deep learning (lower level). Maybe it's just me, but that feels uncomfortable.
    • keywords that always appear together, e.g. nets and single_net in the pde submodule. This suggests they really should be made into one parameter object (also covered in the second sketch after this list).
    • the names and order of keywords are not consistent across the package.
  • conversion between numpy.ndarray and torch.Tensor: the two are converted back and forth everywhere, and I find myself checking whether a sequence is a numpy.ndarray or a torch.Tensor all the time, which makes working on and using the code less pleasant. Since the only reason numpy.ndarray is there is matplotlib.pyplot and numpy.linalg, we should limit its use to those sections and use torch.Tensor everywhere else. Also, numpy.ndarray and torch.Tensor use different default precisions, so conversion can silently introduce error (see the third sketch after this list).
  • the use of torch.Tensor:
    • I was not mindful of the memory footprint of torch.Tensor when I wrote the code; there may be cases where requires_grad is set to True when it really should be False, and cases where a tensor is re-created when it could be reused (see the fourth sketch after this list).
    • I was not mindful when reshaping torch.Tensor, and I'm not sure what the impact on performance is.
  • naming things:
    • non-standard format: function names should not contain uppercase letters (e.g. solve2D), and uppercase variable names are not reserved for constants and enumerations as they should be.
    • names that give false impressions: e.g. ExampleGenerator is not a Python generator (see the fifth sketch after this list).
    • confusing names: e.g. what is an additional_loss? What is an FCNN? Why does ode.DirichletBVP, which has nothing to do with time, have parameters named t_0 and t_1?
  • the use of strings as categorical variables: these should be Enums instead, and in some cases (e.g. the as_type keyword of pde.Solution.__call__) a plain bool would be clearer still (see the last sketch after this list).
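
On the submodule point, here is a minimal sketch of what extracting the shared training loop could look like. The `_solve` name, the `enforce` method, and the generator protocol below are illustrative assumptions, not the package's actual API:

```python
import torch

def _solve(diff_eqs, conditions, nets, train_generator,
           optimizer, criterion, max_epochs):
    """Hypothetical dimension-agnostic loop shared by the ODE and PDE solvers."""
    for epoch in range(max_epochs):
        # the generator yields coordinate tensors of whatever dimension applies
        coords = train_generator.get_examples()
        # apply initial/boundary conditions to each network's raw output
        funcs = [cond.enforce(net, *coords)
                 for cond, net in zip(conditions, nets)]
        residuals = diff_eqs(*funcs, *coords)
        loss = sum(criterion(r, torch.zeros_like(r)) for r in residuals)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return nets

# ode.solve_system and pde.solve2D_system would then become thin wrappers that
# build the right generators, conditions, and networks, and delegate to _solve.
```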
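On the contradicting and co-occurring keywords, one possible shape is a parameter object that accepts exactly one of the alternatives, so conflicting values cannot be passed in the first place. `NetSpec` is a hypothetical name:

```python
from dataclasses import dataclass
from typing import Optional, Sequence
import torch.nn as nn

@dataclass
class NetSpec:
    """Exactly one of `nets` (one per unknown function) or `single_net`."""
    nets: Optional[Sequence[nn.Module]] = None
    single_net: Optional[nn.Module] = None

    def __post_init__(self):
        # reject both-set and neither-set, making the two keywords one choice
        if (self.nets is None) == (self.single_net is None):
            raise ValueError("specify exactly one of nets or single_net")
```

The xy_min/xy_max vs. train_generator contradiction could be handled the same way: accept either the bounds or a generator, but not both, and derive one from the other.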
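On the numpy/torch conversions, a small demonstration of the default-precision mismatch (numpy defaults to float64, torch to float32), plus the one boundary where the conversion could be confined:

```python
import numpy as np
import torch

x_np = np.linspace(0, 1, 7)                     # numpy default: float64
x_th = torch.tensor(x_np, dtype=torch.float32)  # torch default: float32
print(x_np.dtype, x_th.dtype)                   # float64 torch.float32
print(abs(x_np[1] - x_th[1].item()))            # small but nonzero round-off

# keeping torch.Tensor internally and converting only at the plotting boundary:
# plt.plot(ts.detach().cpu().numpy(), us.detach().cpu().numpy())
```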
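On requires_grad and tensor reuse, a minimal sketch using a toy residual for u' + u = 0: only the coordinates we differentiate with respect to need gradient tracking, and the all-zero target can be created once and reused across epochs:

```python
import torch

# coordinates: gradient tracking is needed to compute du/dt via autograd
t = torch.linspace(0, 1, 32).reshape(-1, 1).requires_grad_(True)
# target: created once, tracks no history, reused every epoch
zeros = torch.zeros_like(t, requires_grad=False)

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
u = net(t)
du_dt = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u),
                            create_graph=True)[0]
loss = torch.nn.functional.mse_loss(du_dt + u, zeros)  # zeros reused, not re-created
```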
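On ExampleGenerator, we could either rename the class (ExampleSampler would be one hypothetical option) or wrap it so the name becomes true:

```python
def example_stream(sampler):
    """An actual Python generator: yields one training batch per iteration."""
    while True:
        yield sampler.get_examples()

# usage: batches = example_stream(my_sampler); coords = next(batches)
```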
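On strings as categoricals, a minimal sketch with an Enum, plus a bool where the choice is genuinely binary; ReturnType and to_numpy are hypothetical names:

```python
from enum import Enum

class ReturnType(Enum):
    TENSOR = "tensor"
    NUMPY = "numpy"

def evaluate(u, return_type=ReturnType.TENSOR):
    """Return the solution tensor, optionally converted for plotting."""
    if return_type is ReturnType.NUMPY:
        return u.detach().cpu().numpy()
    return u

# where the choice is binary, a bool keyword reads better still:
# def __call__(self, xs, ys, to_numpy=False): ...
```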