13 May 2017

Incompatibility between Autofirma 1.4.x / 1.5.x and Google Chrome 58

The problem


About 1-2 weeks ago, Autofirma stopped working for me with Google Chrome under OS X.

Looking at the console in the developer tools, I found the following error: net::ERR_RESPONSE_INSECURE. By connecting quickly with Google Chrome to https://127.0.0.1:<port shown in the error message> (quickly, because Autofirma shuts down after a short time) and opening the security panel, I finally got more information: commonName matching error.

Googling a bit, I came across this interesting thread: https://groups.google.com/a/chromium.org/forum/m/#!topic/security-dev/IGT2fLJrAeo

Summary: if a certificate does not have the subjectAltName extension, bad news, because matching the hostname against the commonName field will no longer be supported. The justification is that using the commonName field to decide whether the requested hostname matches the certificate can be ambiguous, since the field is used both for IPs and for domain names.

This is exactly what happens with the certificate generated automatically during the Autofirma installation.
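For reference, a certificate that would satisfy Chrome can be generated with the subjectAltName extension present. Here is a sketch with openssl (this is not the procedure Autofirma itself uses; the paths and validity period are examples):

```shell
# Sketch: generate a localhost certificate WITH the subjectAltName extension,
# which is what Chrome 58+ requires (needs OpenSSL >= 1.1.1 for -addext)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/CN=127.0.0.1" \
  -addext "subjectAltName=IP:127.0.0.1" \
  -keyout /tmp/af_key.pem -out /tmp/af_cert.pem 2>/dev/null
# Chrome now only looks at this extension, not at the CN:
openssl x509 -in /tmp/af_cert.pem -noout -text | grep -A1 "Subject Alternative Name"
```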

Solutions

  • Use another browser: so far, Firefox keeps working fine for me.
  • Force the use of the intermediate server. This is the option we chose, since the direct connection between Autofirma and the browser has already given us plenty of problems (antivirus software, OS X 10.11).
Let's hope a new version that solves the problem comes out soon.



1 May 2017

hg share: Sharing mercurial repository between different clones / checkouts

Our starting point

At our company, we develop a product based on Django. To manage code changes we use Mercurial, and to manage all dependency matters we use buildout + setuptools. Buildout recipes are wonderful when you need to do more than just pull code and resolve and build library dependencies. These things include:
  • Building binaries from source. We use it for building Nginx, which is part of the product.
  • Generating config files. We use it for generating configuration files for nginx, supervisor, etc.
  • Generating SSL certificates
  • etc.
Our deployments use a shared product base with its Mercurial repo, plus customer-specific project customizations which are held in separate Mercurial repos. Managing changes with Mercurial (or any other SCM system) allows us to:
  • Deploy any hot fixes quickly
  • Share code changes easily by merging or "grafting" changesets between branches
  • Get exhaustive change history information

The problem

When working on several projects at the same time, it's not easy to share the same "buildout" project, because each one has its own settings, customizations, and so on. That led me to keep a copy for each customer. Each buildout is about 1 GB.

As the number of customers rises, the space required to hold all the buildouts grows quite large.

The solution

Using shared mercurial repository

A Mercurial repository can be divided into:
  • a history-tracking store where all changesets reside
  • the state, which is basically a pointer to an entry in the history
  • a local copy, which holds any changes that are not yet committed
The store can be shared between several clones / checkouts / repositories. This is just what the Mercurial share extension (hg share) does.

The syntax is similar to the "hg clone" command: hg share <local source repo> [<dest name>]
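The share extension ships with Mercurial but is disabled by default; enabling it is a one-line change in your ~/.hgrc (a minimal sketch; the repository paths in the note below are illustrative):

```ini
[extensions]
; The share extension is bundled with Mercurial; an empty value enables it
share =
```

After that, something like `hg share ~/buildouts/customer-a/product ~/buildouts/customer-b/product` would give the second checkout its own working copy and state while reusing the first one's store.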

One of the advantages is that a change is immediately visible to every clone, which saves a lot of pulls. But care should be taken, because strips / rollbacks apply to them all; this could leave a clone pointing to a state that no longer exists.

Using shared eggs and download-cache directories

These directories hold nearly the same contents across different buildouts, so it's easy to share them. The solution I used is simply symbolic links to some globally shared directories. Another option would be to set specific eggs and download-cache directories in the buildout parameters (e.g. using a "develop.cfg" invoked from "buildout.cfg" which inherits from a "base.cfg").
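The symlink approach can be sketched like this (the paths under /tmp are illustrative stand-ins for wherever the real buildouts live):

```shell
# Sketch: replace each buildout's private eggs/ and download-cache/
# with symlinks into a globally shared location
BASE=/tmp/buildout-demo
SHARED=$BASE/shared
rm -rf "$BASE"
mkdir -p "$SHARED/eggs" "$SHARED/download-cache"
# Two pretend customer buildouts, each with its own private copies
for cust in customer-a customer-b; do
    mkdir -p "$BASE/$cust/eggs" "$BASE/$cust/download-cache"
done
# Swap the private copies for symlinks to the shared directories
for cust in customer-a customer-b; do
    for d in eggs download-cache; do
        rm -rf "$BASE/$cust/$d"
        ln -s "$SHARED/$d" "$BASE/$cust/$d"
    done
done
ls -l "$BASE/customer-a"
```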

A + B

I worked out a little script which automatically replaces each Mercurial repository with a shared one and unifies the eggs and download-cache directories.

Applying both changes reduces each of my buildouts by more than 65%, including the shared part of eggs and download-cache. This is quite a good saving.

Fast PDF scaling with page numbering under Ubuntu

The problem

We want a backend process to scale PDF files and number their pages. Currently, we're using some Java code based on the last LGPL iText version (2.1.7) which does PDF scaling and stamping. But the code includes features for custom output formatting (text tables, barcodes) for footers and margins written in Java, so only software developers have the knowledge to customize and recompile it. Wouldn't it be nicer if the customer could customize these output formats directly?

What we need:
  • PDF stamping feature
  • Page numbering
  • Page scaling
We've used PyPDF2 and xhtml2pdf in the past, but they may be too slow for big documents.

The proposed solution

pdfjam is a package with a bunch of scripts for PDF manipulation, based on the pdflatex / pdftex command-line tools included in the TeX Live binaries package. On Ubuntu, you can get it from the standard repositories.

Scaling


The following command line scales a PDF input file:

pdfjam --scale 0.9 --outfile output.pdf input.pdf

It's very quick. On my machine it takes less than 1s for a 120 page 2.1MB PDF file.

Page Numbering

With some additions we can generate page numbers. Note: the following command should be a one-liner:

pdfjam  --preamble '\usepackage{fancyhdr} \topmargin 85pt \oddsidemargin 140pt \pagestyle{fancy} \rfoot{\Large\thepage} \cfoot{} \renewcommand {\headrulewidth}{0pt} \renewcommand {\footrulewidth}{0pt} '  --pagecommand '\thispagestyle{fancy}' --scale 0.9 --outfile output.pdf input.pdf

This is still very quick. Some explanations:
  • The preamble argument is just the text which goes into the .tex command file fed to pdflatex, right before the "\begin{document}" part.
  • The --pagecommand value is an additional argument which goes into the "\includepdfmerge" command.
  • If you want to have a look at the generated .tex command file, add --no-tidy to the command line.
  • The "topmargin" and "oddsidemargin" values are set for A4 page size. You may experiment with your own preferences.

Page Numbering with the "{page} of {pages}" format

If we would like to write out page numbers like this, we need the lastpage TeX package. Now the pdflatex command (called from pdfjam) must be invoked twice, which requires changing the pdfjam shell script. Just replace the line:

$pdflatex $texFile > $msgFile || {

with something like this:

$pdflatex $texFile > $msgFile && if grep 'xdef' $auxFile > /dev/null ; then $pdflatex $texFile >> $msgFile ; fi || {

i.e.: if the aux file contains any xdef definition, we'll do a second pass.

For Ubuntu, the lastpage package is part of the texlive-latex-extra package. If you don't want to install the recommended documentation, you can run the following command:

sudo apt-get install --no-install-recommends texlive-latex-extra

Now, let's change the page numbering format:

pdfjam --preamble '\usepackage{fancyhdr} \usepackage{lastpage} \topmargin 85pt \oddsidemargin 140pt \pagestyle{fancy} \rfoot{\Large\thepage\ of \pageref{LastPage}} \cfoot{} '  --pagecommand '\thispagestyle{fancy}' --scale 0.9 --outfile output.pdf input.pdf

This doubles the time required to generate the document, but it's still only 1.8 s for my 120-page document.


10 Jan 2016

Escaping single quotes in bash

The problem

We want to define an alias for deleting Python .pyc files. The alias definition:
alias rmpyc="find . -name '*.pyc' -delete
does not work correctly.

Solution

Combining the bash rules for quoting:
  • Variables and patterns are expanded whenever they are NOT enclosed within single quotes
  • A backslash-escaped single quote NOT enclosed within single quotes produces a literal single quote
  • Differently quoted expressions can be concatenated directly, e.g.:
echo "Double quotes: "'"'' and single quotes: '"'"
Applying these rules we can write:
1. alias rmpyc='find . -name '"'"'*.pyc'"'"' -delete'
or
2. alias rmpyc="find . -name '"'*.pyc'"' -delete"
or
3. alias rmpyc='find . -name '\''*.pyc'\'' -delete'
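All three variants build the same literal string, which can be verified with a quick sanity check (not part of the original post; plain variables are used instead of aliases, since aliases are inert in non-interactive shells):

```shell
# Each quoting style must produce the identical command text
s1='find . -name '"'"'*.pyc'"'"' -delete'
s2="find . -name '"'*.pyc'"' -delete"
s3='find . -name '\''*.pyc'\'' -delete'
printf '%s\n' "$s1" "$s2" "$s3"
```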

Detailed explanations

  1. We start with single quotes because we want to define a literal. To obtain a literal single quote, we stop quoting after the first part (find . -name ) and concatenate a single quote enclosed within double quotes; then we write the file pattern within single quotes to avoid its expansion. Again, we concatenate a single quote enclosed within double quotes, and finally we add the rest of the string ( -delete).
  2. We start with double quotes, since there is no pattern to be expanded yet; this allows us to include a literal single quote. After ending the first string (right after the first literal single quote), we concatenate the file pattern enclosed within single quotes to avoid file pattern expansion. Finally we add the rest of the string within double quotes (starting with the second literal single quote).
  3. We start with single quotes as in 1. until we want a literal single quote. To get it, we end the first string (with a single quote) and write an escaped single quote (backslash + single quote). We continue with the file pattern enclosed within single quotes to avoid its expansion, then add the second escaped single quote as before. Finally we concatenate the rest of the string.

11 Nov 2015

Hints for Java JMX monitoring (for Tomcat, Alfresco, Liferay, etc.)

The problem

We want to monitor an Alfresco server which is not directly accessible from the outside world. It sits inside a (VMware) private virtual network behind a firewall.

Have you ever tried to access JMX inside a private virtual network behind a firewall?

It's not easy at all, because of the way JMX connection establishment works: the client connects to a well-known RMI registry host:port. If no additional variables are set, the Java VM then does these things:
1. Guesses its own IP, based on the hostname and /etc/hosts.
2. Dynamically allocates a port to receive "server" connections.
3. Sends this data to the client, so it can establish the connection.

In our scenario (not directly reachable Alfresco server in a private virtual network), this is a real nightmare.

Here are some solutions.

Solution 1: Create / use VPN

I don't have this solution at hand, so I'll jump to the next one.

Solution 2: Fix and expose JMX ports to the outer world (protected by firewall)

1. Download Apache Tomcat's extras catalina-jmx-remote.jar for your version of Tomcat and drop it into the tomcat/lib folder

2. Add to tomcat/conf/server.xml something like this:

<Listener className="org.apache.catalina.mbeans.JmxRemoteLifecycleListener" rmiRegistryPortPlatform="8555" rmiServerPortPlatform="8556"/>

3. Add the following variables to tomcat/bin/setenv.sh (or tomcat/scripts/ctl.sh, in case of Alfresco):
CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote "
CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
CATALINA_OPTS="$CATALINA_OPTS -Djava.rmi.server.hostname=`hostname`"

4. Open your firewall for the given ports and source IPs.
Notes:
  • The java.rmi.server.hostname value is sent verbatim to the client. This makes it possible for the hostname to resolve to one IP at the server and to another at the client.
  • We disable SSL, assuming that access is protected by the firewall.
  • We assume that the firewall maps the ports exposed to the outside world to the server in our private network.
  • In most articles on the Internet, java.rmi.server.hostname should be a valid IP, but this is just another reason why it's so difficult to get the right configuration. I inspected network packets with ngrep and found that the value is sent verbatim.
  • At the server, the hostname value should resolve locally. When I used some outer IPs, Tomcat didn't start up correctly (it took a long, long time...).

Solution 3: Use Jolokia and expose some special URLs to the outer world

Jolokia is an agent which translates JMX queries and operations to REST-HTTP/JSON. With it, it's really easy to write a Nagios check script; I did one in Python, which took something like an hour.

What I did:
1. Download the WAR from the Jolokia download page.
2. Unzip the WAR to edit web.xml
3. Modify web.xml, uncommenting the authentication sections
4. Zip the WAR again and drop it into the tomcat/webapps folder
5. Add a user with the "Jolokia" role to conf/tomcat-users.xml
6. Restart Tomcat
7. Test it with a browser at /jolokia/ (The browser should show an authentication dialog.)
8. Search for jolokia nagios plugins or write one.

With a little bit more time, I modified my Nagios plugin (which I use from Shinken, not Nagios) to display all heap memory data in MBytes or as a percentage, so you can do something like this:

./check_jolokia_heap -U http://......  -w 80% -c 90% -u -p

and here is an example output (should be in one line):
JMX OK HeapMemoryUsage.used=439.57{max=1185.5;init=1248.0;used=439.57;committed=1185.5}|HeapMemoryUsage.used=439.57;998.4;1123.2

Note that, although we specify the -w and -c arguments as percentages, the values are translated into MBytes.

If the -P flag is given, the values are translated into percentages:

JMX OK HeapMemoryUsage.used=34.96%{max=1185.5;init=1248.0;used=436.32;committed=1185.5}|HeapMemoryUsage.used=34.96%;80.0%;90.0%
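The MBytes / percentage translation is just arithmetic over the JSON that Jolokia returns. Here is a sketch against a canned response (the numbers are made up; a real check would first fetch the JSON with curl from a URL like /jolokia/read/java.lang:type=Memory/HeapMemoryUsage):

```shell
# Canned Jolokia response for java.lang:type=Memory HeapMemoryUsage
# (a real check would curl it from the Jolokia endpoint instead)
RESP='{"value":{"init":1308622848,"committed":1200000000,"max":1200000000,"used":600000000}}'
used=$(printf '%s' "$RESP" | sed -n 's/.*"used":\([0-9]*\).*/\1/p')
max=$(printf '%s' "$RESP" | sed -n 's/.*"max":\([0-9]*\).*/\1/p')
# Translate to MBytes and to a percentage, like the plugin output does
used_mb=$(awk -v u="$used" 'BEGIN{printf "%.2f", u/1048576}')
pct=$(awk -v u="$used" -v m="$max" 'BEGIN{printf "%.2f", 100*u/m}')
echo "HeapMemoryUsage.used=${pct}% (${used_mb} MB)"
```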

If you're interested, leave a comment.

Solution 4: Invoke a JMX monitoring through SSH

Before we begin, let's talk about the pros and cons:

  • Pros: You don't have to hassle with JMX configurations.
  • Cons:
    • The JMX monitoring command is invoked at the target machine. Make sure you have enough memory
    • If there is any SSH issue, the command will fail, although the JVM may work correctly
    • You need SSH, of course.


Basically, you don't have to bother about JMX ports, firewalls, etc. Just install the monitoring plugin on the target machine and invoke it through SSH.

Now, the question is: how do we invoke it automatically with no direct SSH connection? (Remember that the host is not directly accessible.)

Here you have two solutions:

a) Configure your firewall to forward the SSH port to the target machine

b) Use an SSH ProxyCommand: define in the ~/.ssh/config SSH configuration of the monitoring account something like this:

# Our proxied destination host
Host destination-host

  ProxyCommand ssh intermediate-host -W %h:%p

Make sure you can reach the intermediate host without password authentication:

ssh-keygen   #if you don't have already any keypair generated
ssh-copy-id intermediate-host

Now, test your connection to the destination host:

ssh destination-host

You should get a prompt asking whether you trust the destination host key's fingerprint, and after that the password prompt. If everything works as expected, just copy your public key to the destination host:

ssh-copy-id destination-host

Finally, copy your monitoring plugin to the destination host and invoke it. E.g., in Nagios / Shinken, your command definition could be something like this:

define command {
    command_name    check_tomcat_mem_heap
    command_line    $NAGIOSPLUGINSDIR$/check_jmx \
        -U service:jmx:rmi:///jndi/rmi://'$HOSTADDRESS$':'$ARG1$'/jmxrmi \
        -O java.lang:type=Memory -A HeapMemoryUsage \
        -K used -w '$ARG2$' -c '$ARG3$'
}

27 May 2014

Liferay 6 and Sentry

Introduction

Sentry [1] is a great tool for error tracking, and Liferay [2] is a very popular portal software that we deploy for our customers as part of our main product.

Log4j configuration with Liferay

As stated in [3], custom log4j configuration is done by adding these files:
  • portal-log4j-ext.xml
  • log4j.dtd
to the folder tomcat-[version]/webapps/ROOT/WEB-INF/classes/META-INF, where [version] is something like 7.0.23 and depends on the concrete Liferay version you are working with.

The portal-log4j-ext.xml overrides the file portal-log4j.xml which can be found inside portal-impl.jar (in webapps/ROOT/WEB-INF/lib). You can get a copy from here [4]. The companion file log4j.dtd can be found here [5].

Get Sentry Java Client (raven-java)

To log any errors in our Liferay instance to Sentry, we need the Sentry Java client, which works together with log4j (as a log4j appender) and can be downloaded from here [7].

Although Liferay already includes log4j 1.2.x, we chose the JAR that includes all dependencies (for a reason I explain below):

Download the file raven-log4j-[version]-jar-with-dependencies.jar, where [version] is currently 1.0-SNAPSHOT.

Configure Liferay to log to Sentry

This is done in two steps:
  1. Copy the downloaded raven-log4j JAR to tomcat-[version]/lib
  2. Customize portal-log4j-ext.xml
Copy the raven-log4j file
Copy the downloaded raven-log4j-[version]-jar-with-dependencies.jar to tomcat-[version]/lib.
Although all the Java dependencies should normally live together with the web application, and the downloaded file contains log4j 1.2.x, we have chosen this approach for the following reasons:

  • The specific log4j appender and Sentry Java client (raven-java) will be available to other deployed web applications. Take into account that in a Liferay deployment each extra portlet is considered a separate web application with its own J2EE application context; each of them would have to be configured individually to log to Sentry.
  • The log4j version included in raven-log4j-[version]-jar-with-dependencies.jar has the same major and minor version (1.2), i.e. the Sentry log appenders (SentryAppender and AsyncSentryAppender) should be 100% compatible. Special caution should be taken when using other versions.
  • tomcat/lib contains class libraries (JARs) which are shared among all of the deployed web applications (WARs). The J2EE classloader magic normally prevents calls from a shared class library back into web application classes, but that case does not arise here.
  • The web application's own classes and class libraries take precedence when searching for a class.
Customize portal-log4j-ext.xml
Following this example [6], where Nuxeo is configured for Sentry, we have to add an appender and activate it in portal-log4j-ext.xml, like this:
 <?xml version="1.0"?>  
 <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">  
 <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">  
   <appender class="org.apache.log4j.ConsoleAppender" name="CONSOLE">  
     <layout class="org.apache.log4j.PatternLayout">  
       <param name="ConversionPattern" value="%d{ABSOLUTE} %-5p [%c{1}:%L] %m%n" />  
     </layout>  
   </appender>  
   <appender class="org.apache.log4j.rolling.RollingFileAppender" name="FILE">  
     <rollingpolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">  
       <param name="FileNamePattern" value="@liferay.home@/logs/liferay.%d{yyyy-MM-dd}.log" />  
     </rollingpolicy>  
     <layout class="org.apache.log4j.PatternLayout">  
       <param name="ConversionPattern" value="%d{ABSOLUTE} %-5p [%c{1}:%L] %m%n" />  
     </layout>  
   </appender>  
   <appender class="net.kencochrane.raven.log4j.SentryAppender" name="Sentry">  
     <param name="dsn" value="http://[two hashes separated by a colon]@log.tangrambpm.es/5" />  
     <filter class="org.apache.log4j.varia.LevelRangeFilter">  
       <param name="levelMin" value="INFO" />  
     </filter>  
   </appender>  
   <category name="com.ecyrd.jspwiki">  
     <priority value="ERROR" />  
   </category>  
 ...  
   <root>  
     <priority value="INFO" />  
     <appender-ref ref="CONSOLE" />  
     <appender-ref ref="FILE" />  
     <appender-ref ref="Sentry" />  
   </root>  
 </log4j:configuration>  

After these steps and a Liferay restart, you should be done.

References

30 Dec 2013

Why do polvorones and mantecados contain E-320?

An open question to manufacturers and distributors of polvorones and mantecados, and to food specialists


Until yesterday I knew nothing about the antioxidant E-320, but while reading the ingredients of polvorones and mantecados (in our habit of checking food ingredients to watch our family's diet), I came across this ingredient, which was unknown to me.

According to the references consulted, E-320 is a synthetic antioxidant, also used in the petroleum industry, employed to preserve fats, and whose consumption should be avoided due to the possible appearance of the following adverse effects:

  • Hyperactivity
  • Asthma
  • Hives
  • Insomnia
  • Increased blood cholesterol
  • Liver metabolism problems
  • Drowsiness
  • Cancerous tumors

Although its use is permitted in Europe and the USA, it is banned in Japan.

So I would like to know:

  • Why is it necessary to add this ingredient? Aren't pork lard and sugar enough to preserve these sweets?
  • What alternatives to E-320 exist for polvorones and mantecados?
  • Why aren't these alternatives being used?

Thank you.

Sincerely,
an ordinary citizen who would like to enjoy the Christmas sweets without worry


References (all consulted on 30/12/2013):

Other references can easily be found on the Internet.