diff --git a/src/analysis/WhamHistogram.cpp b/src/analysis/WhamHistogram.cpp
index b10960bf4ee06536007ff57098842714ca910181..7131c4fb4513c0736cb1624c831f7964c0748a04 100644
--- a/src/analysis/WhamHistogram.cpp
+++ b/src/analysis/WhamHistogram.cpp
@@ -30,18 +30,18 @@ namespace analysis {
 This can be used to output a histogram computed using the weighted histogram technique
 
 This shortcut action allows you to calculate a histogram using the weighted histogram
-analysis technique.  For more detail on how this the weights for configurations are 
+analysis technique.  For more detail on how the weights for the configurations are
 computed see \ref REWEIGHT_WHAM
 
 \par Examples
 
 The following input can be used to analyse the output from a series of umbrella sampling calculations.
-The trajectory from each of the simulations run with the different biases should be concatenated into a 
+The trajectory from each of the simulations run with the different biases should be concatenated into a
 single trajectory before running the following analysis script on the concatenated trajectory using PLUMED
 driver.  The umbrella sampling simulations that will be analysed using the script below applied a harmonic
 restraint that restrained the torsional angle involving atoms 5, 7, 9 and 15 to particular values.  The script
 below calculates the reweighting weights for each of the trajectories and then applies the binless WHAM algorithm
-to determine a weight for each configuration in the concatenated trajectory.  A histogram is then constructed from 
+to determine a weight for each configuration in the concatenated trajectory.  A histogram is then constructed from
 the configurations visited and their weights.  This histogram is then converted into a free energy surface and output
 to a file called fes.dat
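
The workflow described above might look something like the following PLUMED input. This is only a sketch: the restraint centers given with AT, the force constant KAPPA and the grid settings are placeholder values that would need to match the biases actually used in the umbrella sampling runs.

```
phi: TORSION ATOMS=5,7,9,15
# The harmonic restraint that was applied in each of the umbrella sampling runs
rp: RESTRAINT ARG=phi KAPPA=50.0 AT=@replicas:{-3.0,-2.0,-1.0,0.0}
# Binless WHAM weights and histogram accumulated over the concatenated trajectory
hh: WHAM_HISTOGRAM ARG=phi BIAS=rp.bias TEMP=300 GRID_MIN=-pi GRID_MAX=pi GRID_BIN=50
# Convert the histogram to a free energy surface and write it out
fes: CONVERT_TO_FES GRID=hh TEMP=300
DUMPGRID GRID=fes FILE=fes.dat
```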
 
diff --git a/src/analysis/WhamWeights.cpp b/src/analysis/WhamWeights.cpp
index be6c0645b7f6c473e7a23b88202f58a5e641eb1b..c0744108d7fb2d928412cc5c851621fb49828b37 100644
--- a/src/analysis/WhamWeights.cpp
+++ b/src/analysis/WhamWeights.cpp
@@ -35,7 +35,7 @@ analysis technique.  For more detail on how this technique works see \ref REWEIG
 \par Examples
 
 The following input can be used to analyse the output from a series of umbrella sampling calculations.
-The trajectory from each of the simulations run with the different biases should be concatenated into a 
+The trajectory from each of the simulations run with the different biases should be concatenated into a
 single trajectory before running the following analysis script on the concatenated trajectory using PLUMED
 driver.  The umbrella sampling simulations that will be analysed using the script below applied a harmonic
 restraint that restrained the torsional angle involving atoms 5, 7, 9 and 15 to particular values.  The script
diff --git a/src/bias/ReweightWham.cpp b/src/bias/ReweightWham.cpp
index 6c1044d283aab44040e530b19f4a74c8887ed687..01aa42b676be6e42a0832fa564ad62f1feca1975 100644
--- a/src/bias/ReweightWham.cpp
+++ b/src/bias/ReweightWham.cpp
@@ -27,28 +27,28 @@
 /*
 Calculate the weights for configurations using the weighted histogram analysis method.
 
-Suppose that you have run multiple \f$N\f$ trajectories each of which was computed by integrating a different biased Hamiltonian. We can calculate the probability of observing 
+Suppose that you have run \f$N\f$ trajectories, each of which was computed by integrating a different biased Hamiltonian. We can calculate the probability of observing
 the set of configurations during the \f$N\f$ trajectories that we ran using the product of multinomial distributions shown below:
 \f[
 P( \vec{T} ) \propto \prod_{j=1}^M \prod_{k=1}^N (c_k w_{kj} p_j)^{t_{kj}}
 \label{eqn:wham1}
 \f]
-In this expression the second product runs over the biases that were used when calculating the \f$N\f$ trajectories.  The first product then runs over the 
-\f$M\f$ bins in our histogram.  The \f$p_j\f$ variable that is inside this product is the quantity we wish to extract; namely, the unbiased probability of 
+In this expression the second product runs over the biases that were used when calculating the \f$N\f$ trajectories.  The first product then runs over the
+\f$M\f$ bins in our histogram.  The \f$p_j\f$ variable that is inside this product is the quantity we wish to extract; namely, the unbiased probability of
 having a set of CV values that lie within the range for the jth bin.
 
-The quantity that we can easily extract from our simulations, \f$t_{kj}\f$, however, measures the number of frames from trajectory \f$k\f$ that are inside the jth bin.  
+The quantity that we can easily extract from our simulations, \f$t_{kj}\f$, however, measures the number of frames from trajectory \f$k\f$ that are inside the jth bin.
 To interpret this quantity we must consider the bias that acts on each of the replicas so the \f$w_{kj}\f$ term is introduced.  This quantity is calculated as:
 \f[
-w_{kj} = 
+w_{kj} = e^{-\beta V_k(s_j)}
 \f]
-and is essentially the factor that we have to multiply the unbiased probability of being in the bin by in order to get the probability that we would be inside this same bin in 
-the kth of our biased simulations.  Obviously, these \f$w_{kj}\f$ values depend on the value that the CVs take and also on the particular trajectory that we are investigating 
+where \f$V_k\f$ is the bias potential that acted during the \f$k\f$th simulation, \f$s_j\f$ is the value of the CVs in the jth bin and \f$\beta\f$ is the inverse temperature.
+This is essentially the factor that we have to multiply the unbiased probability of being in the bin by in order to get the probability that we would be inside this same bin in
+the kth of our biased simulations.  Obviously, these \f$w_{kj}\f$ values depend on the value that the CVs take and also on the particular trajectory that we are investigating,
 all of which, remember, have different simulation biases.  Finally, \f$c_k\f$ is a free parameter that ensures that, for each \f$k\f$, the biased probability is normalized.
 
-We can use the equation for the probablity that was given above to find a set of values for \f$p_j\f$ that maximizes the likelihood of observing the trajectory. 
-This constrained optimization must be performed using a set of Lagrange multipliers, \f$\lambda_k\f$, that ensure that each of the biased probability distributions 
-are normalized, that is \f$\sum_j c_kw_{kj}p_j=1\f$.  Furthermore, as the problem is made easier if the quantity in the equation above is replaced by its logarithm 
+We can use the equation for the probability that was given above to find a set of values for \f$p_j\f$ that maximizes the likelihood of observing the trajectory.
+This constrained optimization must be performed using a set of Lagrange multipliers, \f$\lambda_k\f$, that ensure that each of the biased probability distributions
+are normalized, that is \f$\sum_j c_kw_{kj}p_j=1\f$.  Furthermore, as the problem is easier to solve if the quantity in the equation above is replaced by its logarithm,
 we actually choose to minimise
 \f[
 \mathcal{L}= \sum_{j=1}^M \sum_{k=1}^N t_{kj} \ln \left( c_k  w_{kj} p_j \right) + \sum_k\lambda_k \left( \sum_{j=1}^M c_k w_{kj} p_j - 1 \right)
@@ -60,10 +60,10 @@ p_j & \propto \frac{\sum_{k=1}^N t_{kj}}{\sum_k c_k w_{kj}} \\
 c_k & =\frac{1}{\sum_{j=1}^M w_{kj} p_j}
 \end{aligned}
 \f]
-which can be solved by computing the \f$p_j\f$ values using the first of the two equations above with an initial guess for the \f$c_k\f$ values and by then refining 
-these \f$p_j\f$ values using the \f$c_k\f$ values that are obtained by inserting the \f$p_j\f$ values obtained into the second of the two equations above.  
+which can be solved by computing the \f$p_j\f$ values using the first of the two equations above with an initial guess for the \f$c_k\f$ values and by then refining
+these \f$p_j\f$ values using the \f$c_k\f$ values that are obtained by inserting the \f$p_j\f$ values obtained into the second of the two equations above.
 
-Notice that only \f$\sum_k t_{kj}\f$, which is the total number of configurations from all the replicas that enter the jth bin, enters the WHAM equations above.  
+Notice that only \f$\sum_k t_{kj}\f$, which is the total number of configurations from all the replicas that enter the jth bin, enters the WHAM equations above.
 There is thus no need to record which replica generated each of the frames.  One can therefore simply gather the trajectories from all the replicas together at the outset.
 This observation is important as it is the basis of the binless formulation of WHAM that is implemented within PLUMED.
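
The self-consistent cycle described above is straightforward to sketch. The following standalone C++ fragment is illustrative only, not PLUMED's implementation: the function name, the uniform initial guess and the convergence test are all choices made for this example. It simply iterates the two update equations until the \f$p_j\f$ stop changing:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Self-consistent WHAM iteration following the two update equations above.
// t[k][j] : number of frames from trajectory k that fall in bin j
// w[k][j] : bias factor exp(-beta*V_k(s_j)) for bin j in simulation k
// Returns the unbiased bin probabilities p_j, normalized to sum to one.
std::vector<double> wham(const std::vector<std::vector<double>>& t,
                         const std::vector<std::vector<double>>& w,
                         int maxiter = 1000, double tol = 1e-10) {
  const std::size_t N = t.size(), M = t[0].size();
  std::vector<double> c(N, 1.0), p(M, 1.0 / M);
  for (int iter = 0; iter < maxiter; ++iter) {
    // First equation: p_j proportional to sum_k t_kj / sum_k c_k w_kj
    std::vector<double> pnew(M);
    double norm = 0.0;
    for (std::size_t j = 0; j < M; ++j) {
      double num = 0.0, den = 0.0;
      for (std::size_t k = 0; k < N; ++k) { num += t[k][j]; den += c[k] * w[k][j]; }
      pnew[j] = num / den;
      norm += pnew[j];
    }
    double change = 0.0;
    for (std::size_t j = 0; j < M; ++j) {
      pnew[j] /= norm;
      change += std::fabs(pnew[j] - p[j]);
    }
    p = pnew;
    // Second equation: refine c_k so each biased distribution is normalized
    for (std::size_t k = 0; k < N; ++k) {
      double s = 0.0;
      for (std::size_t j = 0; j < M; ++j) s += w[k][j] * p[j];
      c[k] = 1.0 / s;
    }
    if (change < tol) break;
  }
  return p;
}
```

In the unbiased limit, where every \f$w_{kj}=1\f$, the iteration reduces to pooling the counts from all the trajectories, which provides a simple sanity check.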
 
diff --git a/src/gridtools/GridVessel.cpp b/src/gridtools/GridVessel.cpp
index 6d639faea035cedd599baf464bb44236d3f7d02c..fc8693cbe61121d07707577b21a5dccabe03836d 100644
--- a/src/gridtools/GridVessel.cpp
+++ b/src/gridtools/GridVessel.cpp
@@ -97,9 +97,9 @@ void GridVessel::setBounds( const std::vector<std::string>& smin, const std::vec
     if( spacing.size()==dimension && binsin.size()==dimension ) {
       if( spacing[i]==0 ) nbin[i] = binsin[i];
       else {
-          double range = max[i] - min[i]; nbin[i] = std::ceil( range / spacing[i]);
-          // This check ensures that nbins is set correctly if spacing is set the same as the number of bins
-          if( nbin[i]!=binsin[i] ) plumed_merror("mismatch between input spacing and input number of bins");
+        double range = max[i] - min[i]; nbin[i] = std::ceil( range / spacing[i]);
+        // This check ensures that the number of bins implied by the spacing matches the number of bins given in input
+        if( nbin[i]!=binsin[i] ) plumed_merror("mismatch between input spacing and input number of bins");
       }
     } else if( binsin.size()==dimension ) nbin[i]=binsin[i];
     else if( spacing.size()==dimension ) nbin[i] = std::floor(( max[i] - min[i] ) / spacing[i]) + 1;