zondag 15 december 2013

finding file context files that do not end with a newline

File context files that do not end with a newline cause annoying situations.

 #!/bin/bash --
 # This script looks for file context files
 # that do not end with a newline
 for i in $(/bin/find "$REF_PATH" -type f -name "*.fc") ; do
     [[ "$(/bin/tail -c 1 "$i" | /bin/tr -dc '\n' | /bin/wc -c)" -ne 1 ]] && printf "%s\n" "Fix me: $i"
 done
 exit $?

maandag 9 december 2013

Another idea

We have the sepolicy tool, which has functionality that aims to make policy development easier. One great benefit is the single point of failure it provides: by using the tool for policy development you reduce the risk of typos and syntax errors. If you have to type everything manually then much can go wrong.

The tool also has its drawbacks, because you are bound to the functionality the tool provides. But nothing stops you from manually editing the generated policy, so that is pretty much a non-issue.

For some reason typos and syntax errors are a pretty common thing for many policy developers, and from that perspective it is probably a good idea to use the tool more often.

Anyhow, the reference policy provides an API "mechanism", and APIs make life easier. The issue is that these APIs are not checked until they are actually called, or until a tool like sepolgen-ifgen is run on them. So if one writes APIs manually then those APIs might not work due to some stupid typos, but the typos are often not identified until someone calls the APIs.

Earlier we had a policy of not adding APIs unless they are actually used. However, the point was made that audit2allow cannot suggest an API if it is not available, and so we agreed that it is probably better to add various APIs even if they are not used.

So we are adding APIs that are not used, and that might be written manually and thus contain typos and syntax errors. That means that APIs with typos in them might not work, and we do not know about it because we do not use them.

He/she who fits the shoes wears them. I make typos in unused APIs often. Just a few days ago I fixed two typos in admin interfaces that I made myself, and it annoys me, because I am the type of person that likes to manually write his policy rather than depend on a tool (even though I know the tool's purpose and I appreciate the issues it solves). I guess I am just stubborn sometimes.

To keep a long story short: it might be a good idea to create an API test script. Again, not an all-inclusive test, but just one that determines whether an API can be called or not. The sepolgen-ifgen tool might be able to help identify issues as well.

Basically a script that just calls interfaces, templates, patterns and so on, to see if they build.
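A minimal sketch of the idea: generate a throwaway test module that calls one interface, so that building the module exercises that interface. The interface name and the devel Makefile path below are illustrative assumptions, not part of any existing tool.

```shell
#!/bin/bash --
# Sketch of an API test: write a minimal test module that calls a given
# interface; building the module is what actually exercises the interface.
gen_iface_test() {
    # $1 = name of the interface to test (hypothetical example below)
    printf 'policy_module(iface_test, 1.0)\n'
    printf 'type iface_test_t;\n'
    printf '%s(iface_test_t)\n' "$1"
}

gen_iface_test "files_read_etc_files" > iface_test.te
# Building the module is the actual test (requires the policy devel headers):
# make -f /usr/share/selinux/devel/Makefile iface_test.pp || echo "Fix me: files_read_etc_files"
```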

APIs are our calling cards. It's how callers see us. If we provide broken interfaces then that leaves a bad impression. That is why I think they deserve more attention: they are not there just for us but also for others.

quick script to check for unsupported device nodes by SELinux

 #!/bin/bash --
 # This script checks for device nodes that are unsupported by SELinux
 # Unsupported device nodes fall back to the device_t generic type identifier for content in /dev
 # The script finds all character and block device nodes, then looks whether any of them is associated with the device_t type identifier
 # If a device node is associated with the device_t type then the script uses matchpathcon to determine if SELinux is aware of the device node
 # If matchpathcon thinks the device node should be associated with the device_t type then the device node is unsupported by SELinux one way or another
 recurse_char() {
  for i in "$1"/*; do
   if [ -d "$i" ]; then
     recurse_char "$i"
   elif [ -c "$i" ] && [ ! -L "$i" ]; then
     /bin/ls -alZ "$i"
   fi
  done
 }
 recurse_block() {
  for i in "$1"/*; do
   if [ -d "$i" ]; then
     recurse_block "$i"
   elif [ -b "$i" ] && [ ! -L "$i" ]; then
     /bin/ls -alZ "$i"
   fi
  done
 }
 recurse_char /dev | while read -r bits owner group context char; do
      if [ "$(printf "%s" "$context" | /usr/bin/awk -F ":" '{ print $3 }')" = "device_t" ]; then
           mpc=$(/usr/sbin/matchpathcon "$char")
           if [ "$(printf "%s" "$mpc" | /usr/bin/awk -F " " '{ print $2 }' | /usr/bin/awk -F ":" '{ print $3 }')" = "device_t" ]; then
                echo "unsupported char device: $char"
           fi
      fi
 done
 recurse_block /dev | while read -r bits owner group context block; do
      if [ "$(printf "%s" "$context" | /usr/bin/awk -F ":" '{ print $3 }')" = "device_t" ]; then
           mpc=$(/usr/sbin/matchpathcon "$block")
           if [ "$(printf "%s" "$mpc" | /usr/bin/awk -F " " '{ print $2 }' | /usr/bin/awk -F ":" '{ print $3 }')" = "device_t" ]; then
                echo "unsupported block device: $block"
           fi
      fi
 done
 exit 0

donderdag 5 december 2013

How I think distribution maintainers can enhance quality assurance

Here is a list with things distribution maintainers can do to enhance quality assurance

1. Do proof-reading.

This enhancement applies to some distributions more than others. Let's define two kinds: "active" distributions and "passive" distributions.

An active distribution is one that is constantly changing, whereas a passive distribution is mostly "stale" and changes less frequently.

A property of active distributions is fast development, where security policy must constantly be adapted to keep up with these fast developments.

Maintaining policy for such scenarios is often just hard work. Things must be "fixed" quickly and often. This is a recipe for human error.

Because in the rush one tends to make assumptions, plain typos and syntax errors, or other relatively simple mistakes.

Proof-reading commits with a fresh pair of eyes can save time here. By just spending a little time each day reviewing new commits, such errors can be identified and fixed quickly.

Some of this proof-reading requires humans, for example identifying mistakes due to assumptions. Other proof-reading can be automated, for example checking for typos and syntax errors.

These human mistakes are often relatively harmless, but sometimes they are harmful. The point I am trying to make is that this can easily be prevented.

For the record, I am not suggesting that this proof-reading is for identifying complex issues. That is a bit more complicated and probably takes a little more time.

No, instead I am suggesting that this method can filter out obvious errors. I can tell you from experience that this pays off. Besides, how hard is it to regularly run a couple of scripts on new commits to spot typos and syntax errors, and to spend five minutes a day reviewing yesterday's commits?
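As an impression of what such an automated check could look like, here is a sketch that flags policy source files with unbalanced parentheses, a crude typo detector. A real syntax check would run the policy compiler or sepolgen-ifgen instead; the file extension and the check itself are just illustrative.

```shell
#!/bin/bash --
# Sketch: flag files whose '(' and ')' counts differ, a crude typo check
balanced_parens() {
    # compare the number of '(' and ')' characters in the file given as $1
    [ "$(tr -dc '(' < "$1" | wc -c)" -eq "$(tr -dc ')' < "$1" | wc -c)" ]
}

# Example: check every interface file under the current directory
for f in $(find . -type f -name "*.if") ; do
    balanced_parens "$f" || printf "%s\n" "Fix me: $f"
done
```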

2. Test your policy for all common scenarios.

Policy can be built with various options. Distributions often do not use all these options. Problems can arise when you do not test building the policy with all of them. This might seem irrelevant in the short term, because I hear distribution maintainers think: if it works for us, then why bother?

There are two reasons to bother:

A. Your priorities might not align with the priorities of upstream, and for your own sake it is better to work with upstream. The more you diverge, the harder it gets to maintain your project. Merging new upstream releases becomes harder and takes more time. By just making sure things work for all scenarios you increase the chances of your changes being applied upstream. If your policy only works with a subset of options then upstream has little choice but to refuse the patch. That wastes your time and upstream's time; it is inefficient.

B. Besides, we know what SELinux is, right? What defines SELinux is its flexibility and configurability. That is why I use SELinux as opposed to any of the other LSM-based MAC systems. So as a distribution maintainer I would actually advertise these properties. One obvious and easy way to do that is to make sure your project builds and installs in all common scenarios.

Sure, there are limits to what you can support. Let's just use the same limits as upstream, since we depend on each other.

So, every once in a while, build your policy with different sets of options to make sure that it builds. You might also want to install it with various options every once in a while; that way you can identify bugs in the user space component.

Because this is not only an issue with policy; it is also an issue with user space. User space must support all common policies and all their options.
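A sketch of such a build matrix: the TYPE and DISTRO values below follow the reference policy build system, but which combinations matter for your project is an assumption. The "echo" prefix makes it a dry run that only prints the build commands; drop it to actually build.

```shell
#!/bin/bash --
# Sketch: exercise the policy build with different sets of options
build_matrix() {
    for type in standard mcs mls ; do
        for distro in redhat gentoo debian ; do
            # remove "echo" to actually run the builds
            echo make TYPE="$type" DISTRO="$distro" bare conf policy
        done
    done
}
build_matrix
```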

3. Make sure that what you target at least works in common configurations.

If you target a daemon then it might be good to write a simple test for some of the common configurations. I am not suggesting all-inclusive tests, just some default test to make sure the most obvious functionality works. It obviously requires some investment, at least initially, but it might just mean the difference between a good and a bad impression.

4. Test your security goals.

Good, the processes you target can function and you have a way to verify that. But what about security? That is what we do all this for. Simple changes to the policy can have huge effects. By defining your security goals, and the crucial properties of your security identifiers, you make it possible to verify that they meet your requirements over time. Run simple tests; we have great tools for that. This can be automated perfectly.
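A sketch of one such test: assert that only sshd_t may write shadow_t files. The goal, the type names and the rule format are illustrative assumptions; the filter reads sesearch-style output from stdin, so the filter itself can be tested without a loaded policy.

```shell
#!/bin/bash --
# Sketch of a security goal test on sesearch-style output
check_shadow_writers() {
    # print every allow rule whose source domain is not sshd_t
    grep '^allow ' | grep -v '^allow sshd_t ' || true
}

# Real usage (requires the setools package and a policy to query):
# sesearch --allow -t shadow_t -c file -p write | check_shadow_writers
```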

5. Other processes and SELinux

In essence SELinux is transparent to user space. However, user space components can be made aware of SELinux, or can even be expected to manage parts of SELinux, for example setting security contexts on files. These programs are often written by parties that might not understand SELinux principles and concepts as well as some of us do.

This can affect the SELinux experience as a whole, so it is in our interest that these processes make the right decisions. One common mistake is processes hardcoding security identifiers. It is generally harmless, but it harms the experience, and it can easily be caught by loading a bare (dummy) policy and then checking dmesg or the audit log to see if some processes are trying to use identifiers that do not exist. More tests are conceivable.
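A sketch of that dummy-policy check: filter kernel messages for attempts to use security identifiers the loaded policy does not define. The "invalid context" message text is an assumption about what the kernel logs; check what yours actually emits.

```shell
#!/bin/bash --
# Sketch: filter kernel messages for undefined security identifiers
find_invalid_contexts() {
    grep -i 'invalid context' || true
}

# Real usage, after loading the bare (dummy) policy:
# dmesg | find_invalid_contexts
```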

These are just some ways distribution maintainers can enhance quality assurance. Many of these points apply both to the security policy component and to the user space component.

Can SELinux be made simpler?

Disclaimer: this is just my opinion

This must be a trick question. There is no easy answer for this.

First we should define SELinux, to be able to put the question into context.

SELinux has three components: the LSM-based system in the kernel, the tools and libraries, and lastly the security policy.

We also need to determine what defines SELinux compared to other LSM-based systems in the kernel.

The answer to the latter question is flexibility and configurability. That is what defines SELinux; it is the main property of SELinux. There are other LSM-based systems with other properties, for example Smack (the Simplified Mandatory Access Control Kernel). That system is defined by simplicity (but needless to say it lacks flexibility and configurability, at least compared to SELinux).

It is easy to translate flexibility to complexity, but the end result does not always have to be relatively complex.

Another thing to consider are constraints. By that I mean that, for example, SELinux must be optional and additional: it must be an add-on to the existing Linux DAC framework.

This prerequisite brings some more "complexity". There are three more of those prerequisites (one of them is not optional for SELinux to work): to enforce mandatory integrity, processes must be able to change type (role-based access control).

The other two prerequisites are optional: the ability to compartmentalise (multi-category security) and the ability to enforce confidentiality (multi-level security).

Back to the question. We now have most considerations briefly explained.

I believe the core of SELinux (the LSM-based system) cannot be made simpler. It is already as simple as it can be, considering all the requirements.
Can the tools and libraries make the experience simpler? Maybe a bit, but the main concepts and principles will still apply; there is little one can do about that.

The last component is what most people experience: the security policy.

So let's look at the state of that. The policy can be built with the minimal set of requirements (e.g. only enforce integrity optionally and additionally, by complementing traditional DAC).

So we can already exclude some complexity, and we do. We have different policy models for different requirements. For example, if you have confidentiality requirements, which add complexity, then you need the MLS policy model.

Let's look at an operating system: Fedora currently is a single distro that can basically be considered general purpose. You can use it as a workstation or a server, or your use case can be even more specific.
But a vendor does not know beforehand how you are going to use it, and so every scenario must be supported.

Here is the dilemma: security is never general. If you want to implement mandatory integrity/confidentiality in a flexible and configurable way, by complementing traditional DAC, on a general purpose system, then you have a complex challenge on your hands which requires complex policy.

So compromises were made: let's not enforce mandatory integrity on the whole system, but only on a part of it. It is give and take. Fine, now things are a bit simpler, but it is not cheap: now user sessions are not contained by default. To me this is already a big compromise, because it basically says: we are leaving workstation scenarios out in the cold with regard to mandatory integrity in user space.

Fortunately some level of mandatory integrity in user space is still optionally available, but it is a compromise and it does not get the attention that I think it deserves.

So it is all about compromises. We have some ability to make things "simpler" and we already apply it.

So how do we make things (a bit) simpler and actually achieve security goals?

We could create "profiles". For example, a Fedora Web Server edition is a profile for a web server. Web servers have (relatively) specific properties, so from a security perspective that enables us to implement a relatively specific security policy for that profile. We just target the web server. This makes the policy "simpler".

A Fedora Desktop edition would have a policy that targets the user space.

Does this solve anything? Not really, if you still end up focusing on specific cases. It might help a bit, but it is still a compromise. Unless of course you really commit to all the profiles you defined.

There are more (less obvious) options, but it is all give and take, and it all concerns the security policy that is implemented. The core of SELinux cannot be simplified (with any significance) without SELinux losing its identity.

So what about setroubleshoot? That is a great success story when it comes to simplifying SELinux?! Not really. It is just a buffed audit2why with a lot of bloat (no offence). It is still a program that cannot make security decisions for you. All it can do is parse, reference, and make it easy to report issues. It does not know when to transition or not.

That is not because it is a bad piece of software. It is just that security policy is not something that is fixed, so you cannot hard-code solutions. And to put things into context, determine the security requirements and act on them requires a huge base of logic. Is it even worth trying to create that?

Think about it from a security perspective: do you easily trust a third party with security decisions that affect your livelihood? Do you trust some code to make security decisions for you?

And even if you do, you will still miss out on the good stuff: configurability and flexibility.

Why not instead focus on learning the core principles and concepts, and enable people to implement mandatory integrity that is tailored to their specific requirements?

So how do you do that? CIL is an abstraction language and compiler that aims to make this easier. It is relatively simple and it can be used as a layer between a policy management program and the SELinux policy language. It makes things less painful, but still won't change the core principles and concepts.
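To give an impression of what CIL looks like (an illustrative fragment I made up, not taken from any real module): every statement is a parenthesised expression, which makes the language easy for programs to generate and parse.

```cil
; declare a domain type and a (hypothetical) log type, then grant the
; domain read access to files of the log type
(block mymodule
    (type mydomain)
    (type mylog)
    (allow mydomain mylog (file (read getattr open))))
```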

What it does do is optimise. Optimization can, I guess, in some ways be translated to making things more pleasurable and thereby perceivably simpler.

You ask me the question: can SELinux be made simpler? My answer is no, but things can be improved and optimized, and compromises can be made.

But it does not change anything about the core principles and concepts.

If you want (relatively) simple, look into simple mandatory access control (Smack). If you want flexibility and configurability, embrace SELinux.

For example, on the one hand we make things "simpler" by not targeting user sessions by default, but on the other hand we enable an optional security model that adds a layer of complexity to achieve compartmentalization.

It is all a matter of priorities.